Many countries are investing in AI research, each hoping to be the first to report a breakthrough in achieving AGI. McClendon notes that although the US and China are at the forefront, Canada, the UK, France, and Germany are making strides in the race.
"It's important to note that achieving AGI will be a significant scientific and technological milestone, and its impact will be global, irrespective of where it's developed," he says.
Also: China is ramping up efforts to drive AI development
What matters about the country where AGI is developed is that country's laws and the geopolitical ends to which its government could leverage AGI's power. A country's philosophy on regulation and technological innovation shapes how fast and how widely its technology can be adopted.
Bryan Cole, director of customer engineering at Tricentis, says companies like OpenAI, Google, and Microsoft may disclose their progress toward AGI, but governments and nation-states may be less open.
The secrecy around who could reach AGI first stems from the pursuit of global dominance and influence: whoever achieves it first could consolidate power without tipping off adversaries to its next step.
Also: OpenAI could 'cease operating' in EU countries due to AI regulations
"Major nation states like China and the US are pouring enormous resources into this because whoever gets an AGI system first will likely have the ability to prevent any other AGI system from coming into existence through technological dominion [or control]," he says.
But until countries reach AGI, AI companies, researchers, and lawmakers must collaborate to create legal safeguards for citizens.
Countries in the EU will soon have to adhere to the risk categories outlined in the EU AI Act, which aims to ensure that AI is overseen by people -- not by automated systems -- to reduce the risk of harmful outcomes. China's regulations require AI systems to support and align with the country's political values, while the US has no legal framework at the federal level to regulate AI.
Also: The EU AI Act: What you need to know
Michael Queenan, founder and CEO of Nephos Technologies, says Western governments should not willfully ignore the vital importance -- and possible dangers -- of AI, or they risk losing the race.
"AI technology is evolving at breakneck speed, far faster than regulators are, and we need to act fast," he says. "We are facing a tsunami of AI and have no plan for it. The West is at risk of being left behind; however, regulation is critical in deciding what and how we should be using it."
As AI becomes more advanced and its applications span different facets of life, it becomes increasingly difficult for lawmakers to create laws that clearly define the risks and how to address them.
Sarah Pearce, partner at the law firm Hunton Andrews Kurth, says lawmakers shouldn't spend too much time coming up with an exact definition of AI and should instead focus their efforts on regulating the technology's outputs.
Also: 3 ways OpenAI says we should start to tackle AI regulation
"I think lawmakers would be better focusing on the outputs and uses of the technology when trying to legislate around it rather than trying to settle on an overly broad definition of what it is, as it is likely that any definition will be outdated by the time the legislation comes into force," she says.
For countries that don't have formal federal legislation surrounding AI, Pearce says governments and AI companies should focus on protecting user data. AI companies collect and use significant amounts of data to train AI models.
"Companies will inevitably be accused of taking their collection activities towards the excessive and may be asked to explain whether and why they are retaining the data for longer than may be perceived necessary," Pearce says. "Often, this is to help improve and further replicate algorithms -- it is not necessarily being used for additional commercial gain."
Also: US Chamber of Commerce pushes for AI regulation, warns it could disrupt economy
AGI, a technology dreamed of since the dawn of computing and depicted in movies like Spike Jonze's "Her," Jon Favreau's "Iron Man," and Stanley Kubrick's "2001: A Space Odyssey," is the lifeblood of AI research, and it's impossible to stop anyone from trying to achieve it.
In our favorite movies, AGI is a helpful sidekick, a loving companion, or a villain that deems humanity a threat and sets out to exterminate it. Which version will we see? Experiencing a generally intelligent agent is no longer a matter of if -- but when.
Will we be ready?