Estimated reading time: 7 minutes
Artificial Intelligence (AI) is rapidly evolving, offering tools for creativity, analysis, and moral decision-making. Yet, despite its capabilities, AI is, at its core, a reflection of its creators’ intentions—mankind’s attempt to build machines that mimic human behavior and intelligence. One such example is Google’s AI Studio, a tool that lets users experiment with generative models, producing outputs that range from rigorous factual analysis to more creative expressions.
This capability—playful and profound—raises essential questions about how AI tools are developed and deployed, who should control those decisions, and why the general public—the very people most affected by these technologies—has so little involvement in, or understanding of, how these tools operate.
The Development Dilemma: Profit-Driven vs. Publicly Controlled AI
Who should decide how AI is built and deployed? This question sits at the heart of ongoing debates about the role of AI in society. On one hand, we have for-profit companies—large tech giants pouring billions into AI research and development. Their incentive, of course, is clear: profit. By creating AI tools that are effective, user-friendly, and appealing, these companies secure their place in the market, sometimes to the detriment of transparency, ethical considerations, and the public good.
On the other hand, there is a growing movement for open-source AI—publicly controlled and collaboratively built AI systems accessible to everyone. Advocates for open-source AI argue that this approach could lead to a more inclusive development of technology, where diverse voices are involved in shaping AI’s direction. This is particularly important as much of the training data used for AI comes from the general public, who currently have minimal control over how their contributions are used or how the resulting technology is developed. A thoughtful combination of private innovation and public participation could help ensure that AI evolves equitably and benefits society.
Implications of Excluding the Public
When the public is excluded from developing AI tools, it opens the door to severe challenges and risks.
Without transparency, there is limited public understanding of how AI decisions are made, what data these models are trained on, and what biases they might inherit from their creators. For instance, the temperature setting in AI Studio lets users adjust how “creative” or “deterministic” the AI output should be—a powerful capability that allows a spectrum of uses, from imaginative storytelling to generating factual responses. But with power comes responsibility. How do we ensure that this power is not misused?
Without public participation, AI risks being used in harmful or manipulative ways. AI-driven misinformation campaigns, deepfakes, or biased decision-making systems that affect job hiring, medical diagnoses, or criminal justice decisions are all potential outcomes when a few entities control the development of AI without broader oversight.
The truth is that we are, in many ways, “flying by the seat of our pants” when it comes to AI. Development is outpacing the regulatory frameworks meant to govern it, and the general public is mostly left in the dark about how these technologies work and the extent of their impact. This isn’t just concerning—it’s dangerous. And yet, it’s also quintessentially human to push boundaries, to venture into the unknown, to “go boldly” into the future.
The Human Element: Balancing Creativity and Responsibility
AI offers a unique kind of power—one that can create poems, analyze data, write code, and even diagnose illnesses. The temperature slider in AI Studio metaphorically represents the broader challenge of AI development: at a low temperature, AI is predictable, factual, and deterministic; at a high temperature, it becomes unpredictable, creative, and exploratory.
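To make the slider metaphor concrete, here is a minimal sketch of sending the same prompt at a low and a high temperature. It assumes the public google-generativeai Python SDK, an API key stored in a GEMINI_API_KEY environment variable, and an illustrative model name; it demonstrates the parameter itself, not any particular AI Studio workflow.

```python
import os
import google.generativeai as genai

# Assumed: an API key in the GEMINI_API_KEY environment variable and the
# google-generativeai SDK; the model name below is an illustrative choice.
genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

prompt = "Explain how photosynthesis works."

# Low temperature: responses stay predictable and closer to plain fact.
factual = model.generate_content(prompt, generation_config={"temperature": 0.1})

# High temperature: responses become more varied and exploratory.
creative = model.generate_content(prompt, generation_config={"temperature": 1.5})

print(factual.text)
print(creative.text)
```

Running the two calls side by side makes the trade-off tangible: the low-temperature answer tends to repeat itself across runs, while the high-temperature answer drifts in tone and detail from one run to the next.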
Just as this slider can be adjusted, we must ask ourselves: Who is adjusting the temperature of AI development? Is it right for a few individuals within a tech company to control the direction and ethics of a tool that could impact billions? Shouldn’t the public have more say in the temperature setting—in how bold or restrained AI’s influence should be?
This lack of public inclusion is perhaps most visible in the ethical challenges AI raises. Human capacities such as empathy, moral responsibility, and intuition are still beyond AI’s grasp. AI cannot feel guilt or joy, cannot “care” about the implications of its actions, and cannot engage in the kind of moral reasoning that comes naturally to human beings. And yet, these are precisely the qualities needed when deciding how AI should be used.
“Flying By the Seat of Our Pants”: A Double-Edged Sword
One could argue that humans have always moved forward by “flying by the seat of our pants”—experimenting, adapting, and learning as we go. But in the context of AI, this approach has significant consequences. The speed at which AI is being developed and deployed leaves little room for reflection, for understanding its broader implications, or for putting in place safeguards that could prevent harm.
Yet, it’s also true that this boldness has driven human progress. It’s what led to the discovery of electricity, the invention of the airplane, and the creation of the internet. The challenge, then, is not to stop the development of AI but to ensure that as we move forward, we do so with our eyes wide open—aware of the risks, conscious of the potential for misuse, and committed to making sure that the benefits of AI are shared equitably.
Who Should Hold the Reins?
The question of who should control AI development—for-profit companies or publicly controlled entities—is a complex one. For-profit companies have the resources, talent, and incentive to push the boundaries of what AI can do. However, they are also responsible for ensuring that their work does not cause harm, that their algorithms are fair, transparent, and accountable, and that they are not simply creating AI tools that serve their bottom line at the expense of the public good.
Publicly controlled AI, on the other hand, offers a more democratic approach. By involving the public in AI development, we can ensure that a broader range of voices and perspectives are heard, that the benefits of AI are shared more broadly, and that the potential harms are mitigated. This approach would require greater transparency, more public education about AI, and a commitment to building tools that serve the needs of all people, not just the privileged few.
A Call for Inclusion, Transparency, and Accountability
To move forward responsibly, we need to involve the public in the conversation about AI more meaningfully. This means not just providing people with a “prompt line” to play with AI tools like AI Studio but genuinely educating them about how these tools work, what their limitations are, and how they can be used ethically.
We need transparency from the companies and organizations developing AI—openness about the data they use, the biases their models may carry, and the potential risks associated with their technologies. We also need accountability—clear guidelines about who is responsible when things go wrong and mechanisms for addressing harm when it occurs.
Perhaps most importantly, we need to recognize that while AI is a powerful tool, it is still a tool. It lacks the uniquely human qualities of empathy, intuition, and moral judgment that are necessary for making decisions that affect people’s lives. As we continue developing and deploying AI, we must ensure that these human qualities remain central to our decision-making processes.
In the end, “flying by the seat of our pants” is a part of what makes us human. But when it comes to AI, we must ensure that we are not just flying but steering. We must chart a course guided by our values, ethics, and commitment to the common good. Only by doing so can we ensure that AI serves humanity rather than vice versa.