

Complete Guide to Cloud Deployment and API Calls for the Deepseek-R1 Model!

Tencent's CodeStudio Deploys the Deepseek-R1 Large Model with Ollama

Tencent's CodeStudio can actually deploy the Deepseek-R1 large model using Ollama. Best of all, it comes with a quota of 10,000 minutes per month, which makes it practically free! Friends with ordinary computers can register for Tencent Cloud and personally experience the joy of deploying a large model.

Preparation Work

Ollama is a free open-source tool that allows users to run large language models (LLMs) locally on their computers. It is compatible with macOS, Linux, and Windows. In simple terms: it is a handy assistant that allows you to install, deploy, and run large models with just one command.

Official Address: Ollama Official Website

If you want to deploy a large model on your own computer, this tool is a must-have!

Steps for Cloud Deployment

Without further ado, let's get started!

image

image

  1. Visit Tencent Cloud: Tencent Cloud IDE

  2. Select Space Template: Choose the "ollama" template in the "Space Template" section. It will prompt you with 10,000 minutes of free time each month, which is 166.7 hours.

image
  3. Ollama List Appears: At this point, an Ollama list will appear.

  4. Open Terminal: A VSCode dialog box will appear; follow the steps in the screenshot to open a terminal area.

  5. Input Command: Enter the following command:

ollama run deepseek-r1

Press the "Enter" key to execute. Ollama will pull the model from its remote registry; by default, it downloads the 7B-parameter version.
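If the default 7B download is too large or too small for your quota, Ollama lets you pick a specific parameter size with a tag. A sketch of the relevant commands (the tag names assume the deepseek-r1 model family as published on the Ollama registry):

```shell
# Pull a specific size of the model; the bare name defaults to the 7B tag.
ollama run deepseek-r1:1.5b    # smallest variant, quickest to download
ollama run deepseek-r1:7b      # same as plain "ollama run deepseek-r1"
ollama list                    # show which models are installed locally
```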

image

In no time, it will be installed, and a >>> prompt will appear, allowing us to converse with Deepseek-R1.

image

Find the port where the Ollama service is running. Here it turns out to be 6399. Since CodeStudio provides a built-in VSCode development window, we can also use Tencent's built-in "AI Code Assistant" to generate a program that calls "deepseek-r1".
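Before wiring up any client code, it is worth confirming that something is actually listening on that port. A minimal sketch, assuming the port 6399 found in this walkthrough (a stock Ollama install listens on 11434 unless overridden):

```python
import socket

def port_open(host="127.0.0.1", port=6399, timeout=1.0):
    """Return True if a TCP listener accepts connections on host:port."""
    try:
        # create_connection raises OSError if nothing is listening.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_open())  # True once the Ollama service is up on port 6399
```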

image

The specific command can be written like this:

curl http://0.0.0.0:6399/api/chat -d '{
  "model": "deepseek-r1",
  "messages": [
    { "role": "user", "content": "Hello" }
  ]
}'

image

Then you can easily call the Deepseek API, making development and debugging convenient.
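The curl call above can be wrapped in a few lines of Python. One wrinkle worth knowing: unless the request body sets "stream": false, Ollama's /api/chat endpoint streams its reply as one JSON object per line, so a client has to stitch the chunks back together. A sketch under the assumptions of this walkthrough (URL and port taken from above; the sample stream is hard-coded so the snippet runs without a live server):

```python
import json

OLLAMA_CHAT_URL = "http://0.0.0.0:6399/api/chat"  # port found in this walkthrough

def build_payload(prompt, model="deepseek-r1"):
    """Request body for Ollama's /api/chat endpoint."""
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def join_stream(lines):
    """Ollama streams one JSON object per line; concatenate the
    incremental message.content pieces into the full reply."""
    parts = []
    for line in lines:
        chunk = json.loads(line)
        if not chunk.get("done"):
            parts.append(chunk["message"]["content"])
    return "".join(parts)

# Hard-coded sample in the shape Ollama streams back, so this runs offline.
sample_stream = [
    '{"message": {"role": "assistant", "content": "Hello"}, "done": false}',
    '{"message": {"role": "assistant", "content": "!"}, "done": false}',
    '{"done": true}',
]
print(join_stream(sample_stream))  # Hello!
```

To call the live server, POST `build_payload(...)` as JSON to `OLLAMA_CHAT_URL` with an HTTP client and feed the response lines through `join_stream`.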
Note
Although 10,000 minutes are granted each month, running the workspace 24 hours a day will exhaust them in just under a week (10,000 minutes is about 6.9 days). When you are done, return to the homepage and simply click to shut the workspace down.
