Google's Gemini AI can now handle much larger prompts, thanks to its next-generation upgrade.
2024-02-15
Gemini 1.5 Pro's new Mixture-of-Experts architecture makes it significantly better at processing large documents and understanding long context than its predecessor.
Google has released its next-generation AI model, Gemini 1.5, just two months after the launch of its initial Gemini AI. The new model boasts "dramatically enhanced performance" thanks to a Mixture-of-Experts (MoE) architecture, in which the network is divided into smaller specialist sub-models and only a few are activated for any given input. This structure makes Gemini more efficient to train and faster at learning complex tasks.

The upgrade, currently available for early testing, is called Gemini 1.5 Pro. Its standout feature is a context window of up to 1 million tokens, far surpassing other AI models such as GPT-4 Turbo, which is capped at 128,000 tokens. Google demonstrated Gemini 1.5 Pro's capabilities in videos showing how it can analyze and summarize large amounts of text from a single prompt. In one example, it located the comedic moments during the Apollo 11 moon mission, identifying who told each joke and explaining the references behind them.
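Google has not published Gemini 1.5's internal design, so the following is only a minimal, generic sketch of the Mixture-of-Experts idea described above: a learned router sends each token to a small number of specialist feed-forward "experts," so only a fraction of the model's parameters run per token. The layer sizes, the top-k routing, and the PyTorch framing are all illustrative assumptions, not Google's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Generic top-k gated Mixture-of-Experts layer (illustrative sketch only)."""

    def __init__(self, d_model: int, d_hidden: int, num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        # A small router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)
        # Each expert is an independent feed-forward block.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) -> flatten tokens for routing.
        tokens = x.reshape(-1, x.shape[-1])
        gate_logits = self.router(tokens)                       # (tokens, num_experts)
        weights, chosen = gate_logits.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)

        out = torch.zeros_like(tokens)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    # Only the tokens routed to this expert are processed by it.
                    out[mask] += weights[mask, slot, None] * expert(tokens[mask])
        return out.reshape_as(x)


# Usage: route a batch of token embeddings through the sparse layer.
layer = MoELayer(d_model=64, d_hidden=256)
y = layer(torch.randn(2, 10, 64))
print(y.shape)  # torch.Size([2, 10, 64])
```

Because each token touches only a couple of experts, the effective compute per token stays small even as the total parameter count grows, which is the usual motivation for MoE designs.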
The AI can also analyze other modalities: in another demo, it accurately identified a scene involving a water tower in a Buster Keaton film from nothing more than a rough sketch, with no accompanying text. However, Gemini 1.5 Pro is currently available only as an early preview for developers and enterprise customers through Google's AI Studio and Vertex AI platforms.
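For developers in the preview, access goes through the AI Studio and Vertex AI surfaces mentioned above. The snippet below is a hypothetical sketch using the google-generativeai Python SDK; the model identifier ("gemini-1.5-pro-latest"), the transcript file name, and preview availability are assumptions that depend on what your account can access.

```python
# Sketch of calling Gemini 1.5 Pro via Google AI Studio's Python SDK.
# The model name is an assumption -- check Google's docs for the
# identifier available to your (preview-gated) account.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key issued via Google AI Studio

model = genai.GenerativeModel("gemini-1.5-pro-latest")

# With a context window of up to 1 million tokens, an entire book or
# mission transcript can be passed in a single prompt.
with open("apollo11_transcript.txt", "r", encoding="utf-8") as f:
    transcript = f.read()

response = model.generate_content(
    ["Find the comedic moments in this mission transcript, "
     "noting who made each joke and what it referenced:",
     transcript]
)
print(response.text)
```

Given the experimental status Google describes, a request over a very large prompt like this may take noticeably longer to return than a typical chat completion.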
Google has not provided a specific timeline for the wider release of Gemini 1.5 or future AI models. The company warns testers to expect long latency while the model is still experimental, though it plans to improve speeds in the future.