New large language models seem to appear in the tech world every day, and the pace at which they arrive is genuinely staggering.
For users and developers who enjoy trying out, debugging, and evaluating these models, the usual routine is tedious: open a pile of browser tabs, visit each site, and chat with each AI bot separately just to collect answers.
To make it easier to experience different large models and get AI-generated answers more efficiently, a developer has open-sourced a browser called GodMode on GitHub, and the project has quickly become popular there.
In just a few days, the project has gained more than 2500 GitHub Stars.
The browser is aimed squarely at AI users. It offers keyboard shortcuts and lets you put the same question to multiple AI chatbots at once, without switching browser tabs, so you can compare the answers generated by different AIs in real time.
In short: one question, multiple answers.
GitHub: https://github.com/smol-ai/GodMode/
The browser supports multiple mainstream AI models such as ChatGPT, Claude2, Bing, Bard, Llama2, and HuggingChat, and is suitable for various application scenarios.
For example, you can hand your code to several models at once, let them all debug it in parallel, and compare their answers to find the most reliable fix.
When you are looking something up or verifying facts, you can also cross-check the results returned by different models, either to spark divergent thinking or to judge how accurate each answer is.
The message-sending flow itself is simple: you type a single prompt, GodMode sends it to every enabled model at the same time, and each model's answer is displayed side by side as it comes back.
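To make that flow concrete, here is a minimal TypeScript sketch of the fan-out pattern: one prompt is sent to several chat backends concurrently and the answers are collected for side-by-side comparison. The provider list, endpoint URLs, request and response shapes, and the ask/askAll helpers are all hypothetical illustrations; GodMode itself embeds each provider's own web interface rather than calling HTTP APIs like this.

```ts
// A hypothetical sketch of "one question, multiple answers".
// The provider list and request/response shapes below are made up for
// illustration; GodMode itself embeds each provider's web UI in a pane
// instead of calling chat APIs like this.

interface Provider {
  name: string;
  url: string; // hypothetical chat endpoint
}

const providers: Provider[] = [
  { name: "model-a", url: "https://example.com/model-a/chat" },
  { name: "model-b", url: "https://example.com/model-b/chat" },
  { name: "model-c", url: "https://example.com/model-c/chat" },
];

// Ask a single provider and return its answer text.
async function ask(provider: Provider, prompt: string): Promise<string> {
  const res = await fetch(provider.url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt }),
  });
  if (!res.ok) throw new Error(`${provider.name} returned HTTP ${res.status}`);
  const data = (await res.json()) as { answer: string };
  return data.answer;
}

// Fan the same prompt out to every provider at once and collect whatever
// comes back, so the answers can be compared side by side.
async function askAll(prompt: string): Promise<Record<string, string>> {
  const results = await Promise.allSettled(providers.map((p) => ask(p, prompt)));
  const answers: Record<string, string> = {};
  results.forEach((result, i) => {
    answers[providers[i].name] =
      result.status === "fulfilled" ? result.value : `error: ${String(result.reason)}`;
  });
  return answers;
}

// Example usage: one question in, multiple answers out.
askAll("Why does this loop never terminate?").then((answers) => {
  console.table(answers);
});
```

Promise.allSettled is used instead of Promise.all so that one slow or failing provider does not prevent the other answers from being collected.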
The browser supports the mainstream operating systems: Mac, Linux, and Windows. At this stage it works best on Mac (ARM64 / Apple Silicon), and the project is still being actively updated on GitHub, with compatibility and optimization work for Linux and Windows ongoing.
There are two main ways to install this browser: downloading the installation package or compiling it yourself.
The simplest way is to download a prebuilt binary from the Releases page of the GitHub project and install it directly.
Method 1: Download the installation package
On the Releases page, find the latest version of the project, open the Assets section, and download the installer that matches the operating system you are using.
Address: https://github.com/smol-ai/GodMode/releases/
Method 2: Compile it yourself
- Clone the project and change into its directory:
git clone https://github.com/smol-ai/GodMode.git
cd GodMode
- Install and run using NPM:
Please note that if you are on Windows, you may need to install the electron-squirrel-startup library first; it handles the startup events of Squirrel.Windows, the framework that gives the app installation and automatic-update support. The installation command is as follows:
npm install electron-squirrel-startup
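For context, here is the library's documented usage pattern inside an Electron app's main process; this is the standard recipe from the electron-squirrel-startup docs, shown as a generic sketch rather than GodMode's actual source:

```ts
// Standard electron-squirrel-startup usage in an Electron main process.
// When Squirrel.Windows launches the app only to handle an install, update,
// or uninstall event, the module returns true and the app quits immediately
// instead of opening any windows.
import { app } from 'electron';

if (require('electron-squirrel-startup')) {
  app.quit();
}
```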
- Once the development environment is ready, you can use NPM to install the project dependencies and run the browser locally in development mode:
npm install --force
npm run start
The packaged binaries will be generated in the /release/build folder.
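Note that npm run start only launches the app in development mode; producing those packaged binaries is a separate step. Projects built on Electron React Boilerplate, which GodMode appears to be given the /release/build output path, usually expose this as a package script, so check the scripts section of package.json to confirm the exact command before relying on it:
npm run package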
With that, the whole process of building GodMode from source is complete.
In conclusion, with new large AI models appearing one after another, a browser like this undoubtedly saves a lot of effort in everyday development and debugging work.
When we use AI to validate ideas or generate code, cross-comparing the answers of different models helps keep errors to a minimum.
A few months ago, when GPT-4 had just been released, it could fairly be called the most powerful large language model. But with the subsequent releases of Claude2 and Llama2, many developers have come to realize that in today's AI scene there is no "best" large model, only a better one.
The landscape of the AI industry shifts by the day, and the newcomers are iterating at a speed, and with a level of quality, that is no less impressive than GPT-4.
As a developer and technology enthusiast, my view is that when trying out AI features, comparing multiple large models side by side brings far more benefit than harm.