Phind's latest V7 model claims to surpass GPT-4 in coding ability while matching GPT-3.5 in speed and offering a 16k-token context window, and it supports Chinese!
No registration required: you can try out a GPT-4-class large model online for free!
Try it at: https://www.phind.com/ (a VPN may be required to access it from mainland China)
Introduction to the Phind Large Model
Phind V7 is built on Phind's open-source code model CodeLlama-34B V2 and has been fine-tuned on 70 billion tokens of high-quality code and reasoning problems.
CodeLlama-34B V2 ranks first on Hugging Face's code-model leaderboard, making it the first open-source code model to beat GPT-4.
It outperforms GPT-4 on coding benchmarks, scoring 74.7% on HumanEval.
In addition, Phind is roughly 5 times faster than GPT-4, generating about 100 tokens per second.
Another key advantage is a context window of up to 16k tokens, of which up to 12k tokens are available for user input and the remaining 4k tokens are reserved for web search results.
The web interface is simple and intuitive and can be used without logging in; you can choose between Phind's own model and GPT-4.
The "best model" is GPT-4, which can only be used 10 times a day.
At the bottom of the input box there is a "Pair Programmer" toggle; when it is on, Phind's answers stay more focused on the ongoing conversation.
Phind V7 is proficient in mainstream programming languages such as Python, C/C++, TypeScript, and Java. Just type in your programming question and it will return code.
You can converse in Chinese, and the comments in the code Phind returns will also be in Chinese. Clicking the triangle (run) button lets you execute the code directly in Replit, which is very convenient.
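For example, asking Phind in Chinese to "用 Python 写一个快速排序" (write a quicksort in Python) might return an answer along the lines of the sketch below. This is an illustrative example of the kind of output described above, with Chinese comments, not an actual Phind transcript:

```python
# 快速排序示例 (quicksort example; illustrative, not an actual Phind response)
def quick_sort(nums):
    """对列表进行快速排序并返回新列表 (sort a list and return a new sorted list)."""
    if len(nums) <= 1:
        return nums
    pivot = nums[0]                                    # 选取第一个元素作为基准 (use the first element as pivot)
    less = [x for x in nums[1:] if x <= pivot]         # 小于等于基准的元素 (elements <= pivot)
    greater = [x for x in nums[1:] if x > pivot]       # 大于基准的元素 (elements > pivot)
    return quick_sort(less) + [pivot] + quick_sort(greater)


if __name__ == "__main__":
    print(quick_sort([5, 2, 9, 1, 7]))                 # 输出 (output): [1, 2, 5, 7, 9]
```

Snippets like this can then be run in Replit with one click via the triangle button mentioned above.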