Humans learn concepts both from labeled supervision and from unsupervised observation of patterns. Machines are typically taught to mimic this ability by training on large annotated datasets, a method quite different from the human pathway, in which a few unlabeled examples suffice to induce an unfamiliar relational concept. We introduce a computational model that emulates human inductive reasoning on abstract reasoning tasks, such as those in IQ tests, using a minimax entropy approach. This approach selects the most effective constraints on the data via minimum entropy and determines the best combination of those constraints via maximum entropy. Applying this unsupervised technique, our model induces concepts from a single instance and reaches human-level performance on Raven’s Progressive Matrices (RPM), Machine Number Sense (MNS), and Odd-One-Out (O3) tasks. These results demonstrate the potential of minimax entropy learning to enable machines to learn relational concepts efficiently from minimal input.
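To make the two entropy criteria concrete, the sketch below gives a standard formulation of the minimax entropy principle; the notation is illustrative (the feature functions \(\varphi_k\), multipliers \(\lambda_k\), and feature set \(S\) are our own symbols, not necessarily those used in the model). Maximum entropy fits the least-committed distribution consistent with a chosen set of constraints, while minimum entropy selects which constraints to include.

\[
p(x;\Lambda,S) \;=\; \frac{1}{Z(\Lambda)} \exp\!\Big(-\sum_{\varphi_k \in S} \lambda_k\,\varphi_k(x)\Big),
\qquad
\Lambda^{*} \;=\; \arg\max_{\Lambda}\; H\big(p(x;\Lambda,S)\big)
\;\;\text{s.t.}\;\; \mathbb{E}_{p}[\varphi_k] = \bar{\varphi}_k,
\]
\[
S^{*} \;=\; \arg\min_{S}\; H\big(p(x;\Lambda^{*}_{S},S)\big).
\]

Intuitively, the inner maximum entropy step avoids assuming anything beyond the selected constraints, and the outer minimum entropy step prefers the constraint set whose fitted model is most specific about the observed data.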