AI researchers left Google to launch Sakana AI, a generative AI startup developing adaptable AI models based on nature-inspired intelligence
Llion Jones, hailed as the “Godfather of Transformers,” spent nearly 12 years at Google, where he played a key role in writing the pivotal Transformer research paper, a cornerstone of contemporary generative AI.
Jones was one of eight Google researchers who developed the Transformer architecture, which underpins today’s generative AI tools, including chatbots like ChatGPT and Bard as well as image generators such as Stable Diffusion, Midjourney, and DALL-E.
Yet, like his co-authors, Jones left Google to start his own AI company. Drawing inspiration from nature, he partnered with former Google colleague David Ha to launch Sakana AI, a generative AI research lab in Tokyo.
Since the Transformer paper debuted in June 2017, all of its co-authors have departed Google to pursue their own startup ventures, driven by the intensifying global demand for generative AI expertise.
The co-founders also explained the meaning behind their company name. “Sakana” derives from the Japanese word さかな (sa-ka-na) for fish and evokes “a school of fish coming together and forming a coherent entity from simple rules.” Their inspiration draws from natural principles such as evolution and collective intelligence.
Before leaving Google, Ha led the tech giant’s AI research division in Japan. Now Sakana’s CEO, he most recently headed research at the image AI startup Stability AI.
Regarding his departure from Google, Jones clarified that he holds no animosity toward the tech giant. However, he felt the company’s vast scale kept him from pursuing the specific work he wanted to do.
“It’s just a side effect of big company-itis,” Jones told CNBC in an interview. “I think the bureaucracy had built to the point where I just felt like I couldn’t get anything done.”
Jones, now Sakana’s CTO, thinks Google is focusing “the entire company around this one technology,” which makes innovation more challenging “because that’s quite a restrictive framework,” he explained.
Ha added that he and Jones have been in discussions with like-minded people interested in large language models (LLMs), although their plans are still taking shape. “I would be surprised if language models were not part of the future,” said Ha.
Sakana AI aims to develop its own generative AI model capable of producing text, images, code, and multimedia. The endeavor enters a competitive arena populated by major AI players such as Google, Microsoft, and OpenAI, along with well-funded startups including Cohere, Character.ai, and Anthropic. Microsoft’s $10 billion investment in OpenAI this year, coupled with valuations of $2 billion for Cohere and $1 billion for Character.ai, underscores the robust financial backing in the sector.
Jones and Ha argue that current AI models are limited by their rigid, inflexible design, which they compare to fixed structures such as bridges or buildings.
“We are building a world-class AI research lab in Tokyo, Japan. We are creating a new kind of foundation model based on nature-inspired intelligence,” the startup said on its homepage.
Both Jones and Ha express unfavorable opinions of OpenAI, the startup credited with popularizing generative AI while amassing significant funding from Microsoft and other investors. Ha characterizes OpenAI as having grown “quite large and a bit bureaucratic,” drawing parallels with certain groups within Google.
Jones maintains that OpenAI’s level of innovation is debatable. He suggests that OpenAI’s major achievements, such as ChatGPT and the DALL-E image-generation service, are rooted in research he conducted at Google. OpenAI then scaled up this research, making enhancements along the way but opting not to share those developments with the community. While it has not released these technologies under an open-source license, OpenAI has published research papers that offer insights into the underlying systems.