With the expanded reach of Artificial Intelligence (AI) comes an increased demand for human control. Maintaining effective control over such systems remains a challenge and has sparked a global effort to promote socio-technical solutions, including governance and control frameworks. In this talk, I discuss the idea of human control and how variable autonomy, a conceptual framework for developing systems whose degree of autonomy can be adjusted, can provide the means of ensuring meaningful human control by satisfying core values advocated in AI governance documents. I further discuss how transparency can be achieved in Multi-Agent Systems, where social interactions between agents give rise to emergent behavior that appears obscure to an outside observer, limiting the ability to maintain meaningful control. Drawing on active research that uses video games as a testbed, I demonstrate how combining behavior-based and logic-based AI methods can prompt individual agents to reactively reorient their priorities away from selfish goals, enhancing performance while ensuring transparency and, consequently, human control.