Sevan Gregory Ficici

sevan AT cs.brandeis.edu
Post-Doctoral Fellow, Harvard University
Ph.D., Brandeis University
Complex Systems Summer School, Santa Fe Institute
MA, Eastman School of Music


My academic research focuses on artificial intelligence, machine learning, multi-agent systems, and game theory, with related efforts in behavioral economics, robotics, and evolutionary biology. My published research is cited in top academic conferences and journals (e.g., Nature, PNAS, JMLR, IJCAI, JAIR, RAS). Below, I highlight and briefly describe some of this research.

Modeling Human Behavior, Theories of Mind, Machine Learning, Multiagent Systems, Game Theory, Behavioral Economics

People today interact with computerized agents with increasing frequency; examples include online chatbots, voice-based assistants, and autonomous-vehicle technology. Agents designed to interact with people must understand human behavior sufficiently well to act in ways that make sense to people. My research at Harvard focused on 1) collecting data on how humans behave in interesting strategic situations, 2) using that data to train probabilistic mixture models for predicting human behavior, and 3) embedding the trained models in computer agents so that the agents can interact successfully with people in similar situations. The models I investigated explored theories of mind (e.g., What is Alice thinking? or What does Bob think Alice is thinking?), and experimental results show that agents using theory-of-mind models interact more successfully with people than agents that use simpler models.
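
As a minimal illustration of the modeling machinery, the sketch below fits the weights of a two-component mixture over fixed behavior "types" using expectation maximization. The types, actions, and data here are hypothetical stand-ins for the far richer theory-of-mind models used in the actual studies.

```java
import java.util.Arrays;

/**
 * Minimal EM sketch: fit mixture weights over K fixed behavior "types",
 * each type being a known probability distribution over discrete actions.
 * The types and data are hypothetical; the published models are richer
 * (theory-of-mind hierarchies with learned parameters).
 */
public class MixtureEM {
    public static void main(String[] args) {
        // Two hypothetical behavior types over 3 actions.
        double[][] type = {
            {0.8, 0.1, 0.1},   // type 0: mostly plays action 0
            {0.2, 0.4, 0.4}    // type 1: spreads play over actions 1 and 2
        };
        int[] observed = {0, 0, 1, 2, 0, 1, 0, 2, 2, 1}; // observed human actions
        double[] w = {0.5, 0.5};                          // initial mixture weights

        for (int iter = 0; iter < 50; iter++) {
            double[] newW = new double[w.length];
            for (int action : observed) {
                // E-step: posterior responsibility of each type for this action.
                double[] r = new double[w.length];
                double z = 0.0;
                for (int k = 0; k < w.length; k++) {
                    r[k] = w[k] * type[k][action];
                    z += r[k];
                }
                // M-step (accumulated): re-estimate weights from responsibilities.
                for (int k = 0; k < w.length; k++) newW[k] += r[k] / z;
            }
            for (int k = 0; k < w.length; k++) w[k] = newW[k] / observed.length;
        }
        System.out.println("Estimated type weights: " + Arrays.toString(w));
    }
}
```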

This research was conducted at Harvard University and makes use of: Mixture models; expectation maximization; extensive human-subjects experiments; game design
I wrote all the code (machine learning, GUI, networking), with old-school hand-calculation of partial derivatives for gradient descent (see the sketch below)! No third-party ML libraries were used.
Languages: Java, with heavy use of Java Swing for the game UI and RMI to build a distributed, 37-host platform for running human-subjects trials
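
For flavor, here is what hand-derived gradient descent looks like on a toy problem: for logistic regression, the partial derivatives reduce to a simple closed form, which the sketch codes directly with no ML libraries. The data and learning rate are illustrative only, not from the actual system.

```java
/**
 * Hand-derived gradient descent, in the "no ML libraries" spirit described
 * above. For logistic regression with cross-entropy loss, the partial
 * derivative with respect to weight j works out analytically to
 * dL/dw_j = sum_i (p_i - y_i) * x_ij. Data here are hypothetical.
 */
public class HandGradient {
    public static void main(String[] args) {
        double[][] x = {{1, 0.5}, {1, 1.5}, {1, 2.5}, {1, 3.5}}; // bias + feature
        double[] y = {0, 0, 1, 1};                                // binary labels
        double[] w = new double[2];
        double lr = 0.1;                                          // learning rate

        for (int iter = 0; iter < 1000; iter++) {
            double[] grad = new double[w.length];
            for (int i = 0; i < x.length; i++) {
                double p = 1.0 / (1.0 + Math.exp(-(w[0] * x[i][0] + w[1] * x[i][1])));
                for (int j = 0; j < w.length; j++) grad[j] += (p - y[i]) * x[i][j];
            }
            for (int j = 0; j < w.length; j++) w[j] -= lr * grad[j]; // descend
        }
        System.out.printf("w0=%.3f w1=%.3f%n", w[0], w[1]);
    }
}
```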
Selected Publications

Ficici, S.G. and Pfeffer, A. (2008) Modeling how Humans Reason about Others with Partial Information, Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS).

Ficici, S.G. and Pfeffer, A. (2008) Simultaneously Modeling Humans' Preferences and their Beliefs about Others' Preferences, Seventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS).



Computational Game Theory, Machine Learning, Multiagent Systems

Game theory is the mathematics of strategic interaction; it allows us to represent and reason about cooperative and adversarial situations. What's the "right" strategy for Rock Paper Scissors? Game theory will tell you; this is called "solving" the game. But when a game becomes large (for example, because it has many players), computing a solution can quickly become intractable.
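
A quick worked check of the Rock Paper Scissors solution: against an opponent who mixes uniformly over the three moves, every reply earns expected payoff zero, so no deviation helps and the uniform mix is the game's equilibrium. The sketch below verifies this directly.

```java
/**
 * Verifies that the uniform mix (1/3, 1/3, 1/3) "solves" Rock Paper
 * Scissors: against it, every pure strategy earns expected payoff 0,
 * so no player can gain by deviating.
 */
public class RpsCheck {
    public static void main(String[] args) {
        // Payoff to the row player; rows/cols are Rock, Paper, Scissors.
        double[][] payoff = {
            { 0, -1,  1},
            { 1,  0, -1},
            {-1,  1,  0}
        };
        double[] opponent = {1.0 / 3, 1.0 / 3, 1.0 / 3};
        String[] names = {"Rock", "Paper", "Scissors"};
        for (int a = 0; a < 3; a++) {
            double ev = 0.0;
            for (int b = 0; b < 3; b++) ev += payoff[a][b] * opponent[b];
            System.out.printf("E[payoff | %s] = %.3f%n", names[a], ev);
        }
    }
}
```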

To address this problem, computational game theory seeks to: 1) identify special classes of games that remain tractable to solve, and 2) devise approximation methods that can be applied to otherwise intractable games. Most work on approximation begins with an exact representation of the game and achieves tractability by computing an approximate solution. We instead introduce a novel method by which we produce a tractably small, approximate representation of the original game and obtain an exact solution to that approximation. Further, our approach is able to learn the approximate representation of the original game simply by observing agents play it.
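
The sketch below conveys the general flavor of the cluster-based idea, though it is not the paper's algorithm: observe each player's empirical behavior, group similar players with a simple clustering step (here, 1-D k-means with hypothetical data), and hand the resulting small game of cluster "prototypes" to an exact solver.

```java
import java.util.Arrays;

/**
 * A minimal sketch of the cluster-based idea (not the paper's algorithm):
 * cluster players by observed behavior, then summarize the large game
 * with a few cluster prototypes. Here: 1-D k-means, k=2, hypothetical data.
 */
public class ClusterPlayers {
    public static void main(String[] args) {
        // Hypothetical observed frequencies with which 10 players chose action A.
        double[] freq = {0.91, 0.88, 0.12, 0.85, 0.15, 0.10, 0.95, 0.08, 0.90, 0.11};
        double[] center = {0.2, 0.8};        // initial cluster centers
        int[] assign = new int[freq.length];

        for (int iter = 0; iter < 20; iter++) {
            // Assign each player to the nearest center.
            for (int i = 0; i < freq.length; i++)
                assign[i] = Math.abs(freq[i] - center[0]) < Math.abs(freq[i] - center[1]) ? 0 : 1;
            // Recompute each center as its cluster's mean.
            for (int k = 0; k < 2; k++) {
                double sum = 0; int n = 0;
                for (int i = 0; i < freq.length; i++)
                    if (assign[i] == k) { sum += freq[i]; n++; }
                if (n > 0) center[k] = sum / n;
            }
        }
        System.out.println("Cluster prototypes: " + Arrays.toString(center));
        System.out.println("Player assignments: " + Arrays.toString(assign));
        // A solver (e.g., Gambit) would then be applied to the small game
        // whose "players" are these clusters, giving an exact solution to
        // the approximate representation.
    }
}
```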

This research was conducted at Harvard University and makes use of: Agent-based simulation; supervised and unsupervised learning
I wrote all of the simulation, learning, and result analysis code (third-party Gambit software was used to solve the game approximations we learned)
Languages: Java, Matlab
Selected Publication

Ficici, S.G., Parkes, D.C., and Pfeffer, A. (2008) Learning and Solving Many-Player Games through a Cluster-Based Representation, 24th Conference on Uncertainty in Artificial Intelligence (UAI).



Evolutionary Biology, Evolutionary Game Theory, Multiagent Systems

Surprisingly deep connections can be found between machine learning and theoretical biology. One such connection led me to discover a previously unknown property of natural selection while using evolutionary game theory to investigate strategy learning in a multiagent setting. Evolutionary game theory is a mathematical framework that shows how rational strategy choices can be obtained through a mindless process of natural selection, rather than agent deliberation. This framework is used to investigate questions of concern to biologists, like the expected distribution of traits, e.g., tall vs. short, in a population at fitness equilibrium (a population state where two or more traits under study confer equal fitness).
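
The sketch below illustrates the standard infinite-population model (replicator dynamics) on the classic Hawk-Dove game; with illustrative payoff parameters V = 2 and C = 4 (not taken from the paper), the hawk fraction converges to V/C = 0.5, the polymorphic state where both traits earn equal fitness.

```java
/**
 * Replicator dynamics (the infinite-population model) for the Hawk-Dove
 * game: hawks fight for a benefit V at fighting cost C, doves share.
 * The population converges to the polymorphic fitness equilibrium at
 * hawk fraction V/C. Parameters are illustrative only.
 */
public class ReplicatorHawkDove {
    public static void main(String[] args) {
        double V = 2.0, C = 4.0;   // benefit and cost of fighting
        double h = 0.9;            // initial fraction of hawks
        for (int t = 0; t < 2000; t++) {
            double fHawk = h * (V - C) / 2 + (1 - h) * V;  // expected hawk payoff
            double fDove = (1 - h) * V / 2;                // expected dove payoff
            double fBar  = h * fHawk + (1 - h) * fDove;    // population mean payoff
            h += 0.5 * h * (fHawk - fBar);                 // discrete replicator step
        }
        System.out.printf("Hawk fraction: %.3f (predicted V/C = %.3f)%n", h, V / C);
    }
}
```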

However, evolutionary game theory assumes an infinite population, which does not hold in the real world. Through a combination of simulation and Markov chain analysis, I show that finite populations exhibit a second-order effect that makes them deviate from the expected distribution of traits, such that the traits no longer equilibrate fitness but instead equilibrate the selection pressures acting on them. For example, say a population is at fitness equilibrium when it contains an equal number of tall and short individuals. An excess of tall individuals may produce a stronger selection pressure toward the short trait than an identical excess of short individuals produces toward the tall trait; this yields an expected trait distribution with more short individuals, rather than an equal number of each trait.
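
To make the finite-population contrast concrete, the following sketch runs a Moran-style birth-death process on the same Hawk-Dove game (an illustrative model, not the paper's exact analysis) and compares the long-run average hawk fraction against the infinite-population prediction.

```java
import java.util.Random;

/**
 * A finite-population sketch: a Moran-style birth-death process on the
 * Hawk-Dove game above (illustrative, not the paper's exact model). In a
 * finite population the long-run average hawk fraction can deviate from
 * the infinite-population prediction V/C, because selection pressure
 * toward the equilibrium need not be symmetric on its two sides.
 */
public class FiniteHawkDove {
    public static void main(String[] args) {
        double V = 2.0, C = 4.0, base = 3.0; // baseline keeps fitness positive
        int N = 50, hawks = N / 2;
        Random rng = new Random(42);
        long steps = 2_000_000, burnIn = 100_000, hawkSum = 0;

        for (long t = 0; t < steps; t++) {
            double h = hawks / (double) N;
            double fH = base + h * (V - C) / 2 + (1 - h) * V; // hawk fitness
            double fD = base + (1 - h) * V / 2;               // dove fitness
            // Birth: choose a parent with probability proportional to fitness.
            double total = hawks * fH + (N - hawks) * fD;
            boolean childHawk = rng.nextDouble() * total < hawks * fH;
            // Death: remove a uniformly random individual.
            boolean deadHawk = rng.nextInt(N) < hawks;
            hawks += (childHawk ? 1 : 0) - (deadHawk ? 1 : 0);
            if (hawks == 0 || hawks == N) hawks = N / 2; // restart on absorption (illustrative)
            if (t >= burnIn) hawkSum += hawks;
        }
        System.out.printf("Mean hawk fraction: %.3f vs. infinite-population %.3f%n",
                hawkSum / (double) ((steps - burnIn) * N), V / C);
    }
}
```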

This research was begun at Brandeis University and makes use of: Agent-based simulation, dynamical systems theory, Markov chain analysis, detailed balance
I wrote all of the simulation and analysis code
Languages: Java, Matlab, C

Selected Publication

Ficici, S.G. and Pollack, J.B. (2007) Evolutionary Dynamics of Finite Populations in Games with Polymorphic Fitness-Equilibria, Journal of Theoretical Biology, 247(3): 426-441.



Robotics, Machine Learning, Multiagent Systems

Figuring out how to control a robot is often much more difficult than building the robot itself. Hand-engineering a control program is typically intractable, so some form of machine learning is used. Because machine learning requires many attempts at controlling the robotic hardware, execution on the actual robot, in real time, can be prohibitively slow. Consequently, learning is frequently done in simulation, which can execute faster than real time.

But simulation introduces its own problems: if the simulation doesn't capture salient aspects of the real world with sufficient fidelity, then machine learning may construct a control program that exploits properties of the simulated world that don't exist in the real world; the control program will then fail to transfer to reality, and the robot's behavior will be maladapted.

Our work presents an innovative learning method that eliminates simulation and instead parallelizes learning across a population of real-world robots to achieve the necessary speed-up. Our results show success even with our custom hand-assembled, low-precision robotic hardware: learning finds a control program that is robust to the individual differences between robots.
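
The sketch below conveys the gene-broadcast flavor of the approach, run here as a software simulation (the actual system ran on physical robots, and the details differ): fitter agents broadcast genes more often, less fit agents accept incoming genes more readily, and noisy copying supplies variation. The fitness function and all parameters are hypothetical.

```java
import java.util.Random;

/**
 * A loose sketch of embodied evolution's gene-broadcast scheme, simulated
 * rather than embodied: each agent carries its own genome, broadcasts at a
 * rate that grows with its fitness, and accepts incoming genes more readily
 * when its own fitness is low. Task and parameters are hypothetical.
 */
public class EmbodiedEvolutionSketch {
    static final int ROBOTS = 8, GENES = 5;

    public static void main(String[] args) {
        Random rng = new Random(7);
        double[][] genome = new double[ROBOTS][GENES];
        for (double[] g : genome)
            for (int j = 0; j < GENES; j++) g[j] = rng.nextDouble();

        for (int step = 0; step < 10_000; step++) {
            int sender = rng.nextInt(ROBOTS);
            int receiver = rng.nextInt(ROBOTS);
            if (sender == receiver) continue;
            // Fit robots broadcast more; unfit robots accept more.
            if (rng.nextDouble() < fitness(genome[sender])
                    && rng.nextDouble() > fitness(genome[receiver])) {
                int j = rng.nextInt(GENES);
                genome[receiver][j] = genome[sender][j] + 0.01 * rng.nextGaussian(); // noisy copy
            }
        }
        double best = 0;
        for (double[] g : genome) best = Math.max(best, fitness(g));
        System.out.printf("Best fitness after run: %.3f%n", best);
    }

    // Hypothetical task: fitness is highest when all genes are near 1.
    static double fitness(double[] g) {
        double sum = 0;
        for (double v : g) sum += v;
        return Math.max(0, Math.min(1, sum / g.length));
    }
}
```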

This research was conducted at Brandeis University and makes use of: Custom, hand-assembled robots with plastic food-container bodies, and hand-built experiment platform from which robots draw power; distributed evolutionary learning
Selected Publication

Watson, R.A., Ficici, S.G., and Pollack, J.B. (2002) Embodied Evolution: Distributing an Evolutionary Algorithm in a Population of Robots, Robotics and Autonomous Systems, 39(1): 1-18.