
Why open source is essential to allaying AI fears, according to Stability.ai founder

Meet Mr. anti-OpenAI. "There is no way you can use black-box models" for the world's most valuable data, insists Stability.ai founder Emad Mostaque.
Written by Tiernan Ray, Senior Contributing Writer
Stable Diffusion images of a face
Tiernan Ray via Stable Diffusion/ZDNET

Twiddling the knobs at Stability.ai's website can be an addictive pastime for an hour or so. Using the DreamStudio software program made by the four-year-old British startup, one can create slick illustrations just by typing a phrase such as, "The highly diverse ZDNET authors seen through the windows of their stellar cruiser on their way to Ceti Alpha V."

Playing with the language -- prompt engineering -- one can add different scenarios, such as "The authors of ZDNET are an intergalactic force of half-human, half-panda warrior superheroes who wear a giant Z on the front of their costumes."
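For readers who want to experiment outside the DreamStudio interface, the sketch below shows how the same kind of text-to-image prompting might be done locally with the openly released Stable Diffusion weights via Hugging Face's diffusers library. The checkpoint name, prompt, and output filename are just examples, and a CUDA-capable GPU is assumed; this is an illustration, not Stability.ai's own service.

```python
# Minimal text-to-image sketch using open Stable Diffusion weights via the
# diffusers library; assumes a CUDA GPU and that the example checkpoint
# "stabilityai/stable-diffusion-2-1" can be downloaded.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

prompt = ("The highly diverse ZDNET authors seen through the windows "
          "of their stellar cruiser on their way to Ceti Alpha V")
image = pipe(prompt, guidance_scale=7.5, num_inference_steps=30).images[0]
image.save("zdnet_cruiser.png")
```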

Also: The best AI art generators to try

Or, one can morph an existing photo, such as the headshot of Stability.ai's founder and CEO, Emad Mostaque, until his features turn to clay or shards of glass, a process akin to Photoshop filters on steroids.
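That photo-morphing trick is, under the hood, an image-to-image pass: the model re-renders an existing picture under the guidance of a new prompt. Here is a minimal sketch of how that might look with the open Stable Diffusion weights and the diffusers library rather than DreamStudio itself; the input filename and prompt are placeholders, and the strength parameter controls how far the result drifts from the original photo.

```python
# Rough image-to-image sketch with open Stable Diffusion weights via diffusers;
# "headshot.png" is a placeholder input photo, and a CUDA GPU is assumed.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

init = Image.open("headshot.png").convert("RGB").resize((768, 768))
out = pipe(
    prompt="a portrait sculpted from shards of glass",
    image=init,
    strength=0.6,        # how far to drift from the original photo
    guidance_scale=7.5,  # how strongly to follow the prompt
).images[0]
out.save("morphed.png")
```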

The DreamStudio software, which burst onto the scene a year ago, is one of the recent crop of "generative" artificial intelligence programs, similar to OpenAI's ChatGPT.

But Mostaque is establishing himself as the anti-OpenAI. His contention is that programs such as ChatGPT and DreamStudio are so important to the future of humanity that the world -- and especially the business community -- will demand to know how they work before trusting them with sensitive data.

Also: How to use Stable Diffusion AI to create amazing images

"Open models will be essential for private data," said Mostaque during a small meeting of press and executives via Zoom last month. "You need to know everything that's inside it; these models are so powerful." 

That's important, he contends, because "a lot of people are realizing that most of the valuable data in the world is private data, regulated data," said Mostaque. "There is no way that you can use black-box models for your health chatbots, or your education, or in financial services, whereas, an open model, with an open-source base, but with licensed variant data, and then the company's private data, is really important."

Also: ChatGPT's success could prompt a damaging swing to secrecy in AI

Mostaque's business plan can be summarized as, "I can be the leader of open even as everyone else does closed."

People sitting together in Stable Diffusion image

Image created in Stability.ai's DreamStudio using the prompt, "The highly diverse ZDNET authors seen through the windows of their stellar cruiser on their way to Ceti Alpha V."

Tiernan Ray via Stable Diffusion/ZDNET

By "closed," Mostaque was alluding to the decision in March by OpenAI not to disclose any technical details about its latest generative AI program, the large language model called GPT-4. Some scholars of AI have warned the move could have a chilling effect on research, and that lack of disclosure has enormous moral implications.

Stability.ai is one of a number of parties, both commercial and academic, that have responded to OpenAI's lack of disclosure by creating alternatives. Some are dedicated to openness per se. Others believe open-source software will bring greater efficiency to tame the enormous compute budget that large language models demand. 

Also: How to use ChatGPT to write code

Mostaque, a former hedge fund manager, sees a great business opportunity, "a very large arbitrage opportunity," as he puts it, in helping businesses "minimize the maximum regret," in actuarial terms.

The open-source world of engineering and science, he contends, can allay businesses' fears about AI, especially the many publicized issues with ChatGPT and its ilk. That includes -- but is not limited to -- "hallucinations," when programs give the wrong answer; bias; unethical output; and copyright infringement.

Image of panda using Stable Diffusion

Image created with Stability.ai's DreamStudio with the prompt, "The authors of ZDNET are an intergalactic force of half-human, half-panda warrior superheroes who wear a giant Z on the front of their costumes."

Tiernan Ray via Stable Diffusion/ZDNET

As Mostaque sees the science-business partnerships, open-source software will produce "a benchmark model for every modality, based on open data, from the commons to the commons, and then for every sector, commercially licensed variants where you know every single thing that's in there," meaning, in the program and its training data. 

The term "modality" refers to which media kind of data, such as text, image, sound. Mostaque's vision is all modalities will be enabled by open-source AI programs, not just the natural language kind that are all the rage.

Also: This new technology could blow away GPT-4 and everything like it

Stability.ai's efforts are part of an emerging consensus that many institutions should step into the breach with code where outfits such as OpenAI go dark.

Some groups have simply built upon the openly published GPT designs, such as an effort unveiled in March by AI hardware maker Cerebras Systems, which released as open source its own trained versions of GPT-style programs. 

But there is also a kind of collaborative ecosystem brewing.

Also: How to use Midjourney to generate any image you can imagine

Facebook owner Meta's AI group in February released the open-source LLaMA for natural language processing, which researchers at Stanford University subsequently built upon to create Alpaca. Then, a joint team from UC Berkeley, Carnegie Mellon, Stanford, UC San Diego, and the Mohamed bin Zayed University of Artificial Intelligence in Abu Dhabi built on LLaMA to create yet another program, called Vicuna.

Last week, Mostaque's company released an open-source large language model called Stable Vicuna, based on the Vicuna program. (A vicuña is a South American mammal, a nod to a long tradition of animal names in open-source programs.)

Also: Generative AI is changing your technology career path. What to know

Mostaque has been following this collaborative route for the past few years with various institutions. The technology on which DreamStudio is based, called Stable Diffusion, is a parallel to OpenAI's DALL-E image generator. It allows the generation of an image based on strings of words typed by the user. 

Stable Diffusion was developed by Stability.ai in partnership with researchers at the Computer Vision & Learning research group at the Ludwig Maximilian University of Munich, Germany, which published the original work on "latent diffusion."

The latent diffusion work, as described in last year's paper by Robin Rombach and colleagues at Ludwig Maximilian, sought to slim down the enormous compute budget of image generation, which is one of the most compute-intensive of all AI tasks. 
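A back-of-the-envelope calculation shows why working in a compressed "latent" space saves so much compute. The figures below use the publicly documented shape of Stable Diffusion's latent space (roughly an 8x downsampling per side into 4 channels); the exact savings vary by model, so treat this as an illustration rather than a benchmark.

```python
# Rough illustration of the latent-diffusion saving: the denoising network
# operates on a compressed latent rather than raw pixels.
H, W = 512, 512                          # output image resolution
pixel_values = H * W * 3                 # RGB pixel space: 786,432 values
latent_values = (H // 8) * (W // 8) * 4  # 64 x 64 x 4 latent: 16,384 values

print(f"pixel space:  {pixel_values:,} values")
print(f"latent space: {latent_values:,} values")
print(f"reduction:    {pixel_values / latent_values:.0f}x fewer values per step")
```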

Also: ChatGPT is not innovative or revolutionary, says Meta's chief AI scientist

Stability.ai has also focused on economies of scale. The Stable Diffusion software, Mostaque points out, is "a hundred thousand gigabytes of images compressed to a two-gigabyte file."

By slimming down the compute budget, the technology of large AI models can be on every smartphone, Mostaque envisions, as a personal helpmate to every individual. 

"This is next-generation infrastructure," he said.

Mostaque was an invited speaker for a 90-minute talk hosted by the Collective[i] Forecast, an online, interactive discussion series that is organized by Collective[i], which bills itself as "an AI platform designed to optimize B2B sales."

Also: I used ChatGPT to write the same routine in these ten obscure programming languages

Mostaque started out his career at age 18 programming assembly language routines. "Kids today have it easy: half the code on GitHub is generated by AI," he observed. 

Mostaque became inspired by artificial intelligence, he said, when his son was diagnosed with autism. "Everyone was, like, there's no cure, no information," he recalled. "We built an AI team, and we built a program to analyze all the literature [on autism], and then a pathway analysis model to evaluate potential causes, in order to identify drugs that could be repurposed for him with medical assistance.

Also: AI has caused a renaissance of tech industry R&D, says Meta's chief AI scientist

"He ended up going to a mainstream school, which I think is pretty cool," said Mostaque.

Now, Mostaque envisions extending the benefits of AI to the rest of humanity through compact, efficient AI programs that can be widely distributed.

"We are in the right place, ethically," he said, "in terms of bringing this technology to everyone by focusing not on AGI [artificial general intelligence] to replace humans, but how do we augment humans with small, nimble models."

Disclaimer: Using AI-generated images could lead to copyright violations, so people should be cautious if they're using the images for commercial purposes.  
