
Deployment Platform
MacBook Pro M1 Max 32GB
macOS Sonoma 14.7.2

Ollama

Download Ollama from the official Ollama website, then open the installer and follow the prompts.

[Figure: ollama_download]

After installation, open a terminal and check that it runs correctly:

❯ ollama --version                                                                            
ollama version is 0.5.7

Then start the service, which logs its configuration and the registered API routes:

ollama serve
2025/02/05 19:07:01 routes.go:1187: INFO server config env="map[HTTPS_PROXY: HTTP_PROXY: NO_PROXY: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_GPU_OVERHEAD:0 OLLAMA_HOST:http://localhost:8080 OLLAMA_KEEP_ALIVE:5m0s OLLAMA_KV_CACHE_TYPE: OLLAMA_LLM_LIBRARY: OLLAMA_LOAD_TIMEOUT:5m0s OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MODELS:/Users/luohanjie/.ollama/models OLLAMA_MULTIUSER_CACHE:false OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://* vscode-webview://*] OLLAMA_SCHED_SPREAD:false http_proxy: https_proxy: no_proxy:]"
time=2025-02-05T19:07:01.625+08:00 level=INFO source=images.go:432 msg="total blobs: 0"
time=2025-02-05T19:07:01.625+08:00 level=INFO source=images.go:439 msg="total unused blobs removed: 0"
[GIN-debug] [WARNING] Creating an Engine instance with the Logger and Recovery middleware already attached.

[GIN-debug] [WARNING] Running in "debug" mode. Switch to "release" mode in production.
- using env: export GIN_MODE=release
- using code: gin.SetMode(gin.ReleaseMode)

[GIN-debug] POST /api/pull --> github.com/ollama/ollama/server.(*Server).PullHandler-fm (5 handlers)
[GIN-debug] POST /api/generate --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (5 handlers)
[GIN-debug] POST /api/chat --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (5 handlers)
[GIN-debug] POST /api/embed --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (5 handlers)
[GIN-debug] POST /api/embeddings --> github.com/ollama/ollama/server.(*Server).EmbeddingsHandler-fm (5 handlers)
[GIN-debug] POST /api/create --> github.com/ollama/ollama/server.(*Server).CreateHandler-fm (5 handlers)
[GIN-debug] POST /api/push --> github.com/ollama/ollama/server.(*Server).PushHandler-fm (5 handlers)
[GIN-debug] POST /api/copy --> github.com/ollama/ollama/server.(*Server).CopyHandler-fm (5 handlers)
[GIN-debug] DELETE /api/delete --> github.com/ollama/ollama/server.(*Server).DeleteHandler-fm (5 handlers)
[GIN-debug] POST /api/show --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (5 handlers)
[GIN-debug] POST /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).CreateBlobHandler-fm (5 handlers)
[GIN-debug] HEAD /api/blobs/:digest --> github.com/ollama/ollama/server.(*Server).HeadBlobHandler-fm (5 handlers)
[GIN-debug] GET /api/ps --> github.com/ollama/ollama/server.(*Server).PsHandler-fm (5 handlers)
[GIN-debug] POST /v1/chat/completions --> github.com/ollama/ollama/server.(*Server).ChatHandler-fm (6 handlers)
[GIN-debug] POST /v1/completions --> github.com/ollama/ollama/server.(*Server).GenerateHandler-fm (6 handlers)
[GIN-debug] POST /v1/embeddings --> github.com/ollama/ollama/server.(*Server).EmbedHandler-fm (6 handlers)
[GIN-debug] GET /v1/models --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (6 handlers)
[GIN-debug] GET /v1/models/:model --> github.com/ollama/ollama/server.(*Server).ShowHandler-fm (6 handlers)
[GIN-debug] GET / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] GET /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] GET /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
[GIN-debug] HEAD / --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func1 (5 handlers)
[GIN-debug] HEAD /api/tags --> github.com/ollama/ollama/server.(*Server).ListHandler-fm (5 handlers)
[GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
time=2025-02-05T19:07:01.625+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:8080 (version 0.5.7)"
time=2025-02-05T19:07:01.625+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners=[metal]
time=2025-02-05T19:07:01.652+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=metal variant="" compute="" driver=0.0 name="" total="21.3 GiB" available="21.3 GiB"
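
Note that on this machine OLLAMA_HOST points at port 8080 rather than the default 11434, as the config dump above shows. As a quick sanity check of the HTTP API (a minimal sketch; adjust the port to your own setup), you can query the version route registered above, which should echo the version shown earlier:

❯ curl http://localhost:8080/api/version
{"version":"0.5.7"}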

DeepSeek

The Ollama Models page lists the supported DeepSeek models; pick one according to your available GPU memory:

Model     VRAM (GB)
1.5b      2
7b, 8b    4~8
14b       12
32b       24

Here we use the 32b model. Run the following command in the terminal to download it:

ollama pull deepseek-r1:32b 
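
Once the pull finishes, ollama list should show the model locally (illustrative output; the ID and timestamp will differ):

❯ ollama list
NAME               ID      SIZE     MODIFIED
deepseek-r1:32b    ...     19 GB    ...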

After the download completes, you can chat directly in the terminal (e.g. via ollama run deepseek-r1:32b):

pulling manifest 
pulling 6150cb382311... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 19 GB
pulling 369ca498f347... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 387 B
pulling 6e4c38e1172f... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 148 B
pulling c7f3ea903b50... 100% ▕█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 488 B
verifying sha256 digest
writing manifest
success
>>> 你好
<think>

</think>

你好!很高兴见到你,有什么我可以帮忙的吗?无论是学习、工作还是生活中的问题,都可以告诉我哦! 😊

Type /bye to exit the chat.
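
The same model is also reachable through the HTTP API whose routes were logged earlier. A minimal sketch (assuming the non-default port 8080 from this setup; "stream": false makes the reply arrive as a single JSON object):

curl http://localhost:8080/api/chat -d '{
  "model": "deepseek-r1:32b",
  "messages": [{"role": "user", "content": "你好"}],
  "stream": false
}'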

VSCode Continue

Search for the Continue extension under EXTENSIONS in VSCode and install it.

[Figure: Continue]

A Continue button will then appear in the VSCode sidebar; click it. In the chat box at the top, click the ˇ next to Claude 3.5 Sonnet and select Add Chat model.

[Figure: Continue2]

In the popup, set Provider to Ollama and Model to Autodetect (make sure the Ollama app is running), then click Connect.

[Figure: Continue3]

Click the ˇ next to Claude 3.5 Sonnet again and select Autodetect - deepseek-r1:32b.

[Figure: Continue4]
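
If Autodetect does not list the model, it can also be declared explicitly in Continue's config file (a sketch of the ~/.continue/config.json format; newer Continue releases use config.yaml instead):

"models": [
  {
    "title": "deepseek-r1:32b",
    "provider": "ollama",
    "model": "deepseek-r1:32b"
  }
]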

You can then chat with the model in the dialog box:

[Figure: Continue5]

Usage

Select some code in VSCode, right-click, and pick the corresponding Continue action to use the code assistant. For example, automatic code comments:

[Figure: Continue6]

Code optimization:

[Figure: Continue7]

Automatic code fixes:

[Figure: Continue8]

Installing Docker on Ubuntu

Uninstall old versions.

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done

Set up Docker’s apt repository.

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo apt-get install libnss3
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$UBUNTU_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

Install the Docker packages.

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify the installation:

sudo docker version

Client: Docker Engine - Community
 Version:           27.4.1
 API version:       1.47
 Go version:        go1.22.10
 Git commit:        b9d17ea
 Built:             Tue Dec 17 15:45:42 2024
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          27.4.1
  API version:      1.47 (minimum version 1.24)
  Go version:       go1.22.10
  Git commit:       c710b88
  Built:            Tue Dec 17 15:45:42 2024
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.7.24
  GitCommit:        88bf19b2105c8b17560993bee28a01ddc2f97182
 runc:
  Version:          1.2.2
  GitCommit:        v1.2.2-0-g7cb3632
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
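
As an extra smoke test (standard first-run check; it pulls a tiny image from Docker Hub and prints a greeting):

sudo docker run hello-world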

Install Docker Desktop

Download the latest DEB package, then install it:

sudo apt-get install ./docker-desktop-amd64.deb

Upgrade Docker Desktop

To upgrade, download the new package and install it over the existing one:

sudo apt-get install ./docker-desktop-<arch>.deb

Change Docker Desktop registry mirrors

Open Docker Desktop, go to Settings > Docker Engine, and add the following:

"registry-mirrors": [
"https://alzgoonw.mirror.aliyuncs.com",
"https://docker.m.daocloud.io"
]

Alternatively, edit the daemon configuration directly:

sudo nano /etc/docker/daemon.json

Add:

{
  "registry-mirrors": ["https://alzgoonw.mirror.aliyuncs.com", "https://docker.m.daocloud.io"]
}

Restart Docker:

sudo systemctl daemon-reload
sudo systemctl restart docker
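
To confirm the mirrors were applied, docker info prints a Registry Mirrors section (the grep window is just a convenience):

sudo docker info | grep -A 3 "Registry Mirrors"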

Modify the DNS configuration

sudo nano /etc/resolv.conf

Add:

nameserver 8.8.8.8
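
To verify that resolution works afterwards (nslookup comes from the dnsutils package on Ubuntu; the registry hostname is just an example):

nslookup registry-1.docker.io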

In computer vision systems, object localization and tracking is an important capability, widely used in augmented reality, virtual reality, and similar applications. The user supplies one or more template images to be tracked; the system captures the current environment through a camera as the target image, locates any occurrences of the templates in it, and outputs their spatial positions. Traditional approaches rely on feature-point-based image matching. Such algorithms need no prior knowledge, but they are computationally expensive and suffer from low localization accuracy and low success rates. Continuous tracking methods such as KLT optical flow and ESM (Efficient Second-order Minimization), by contrast, localize with high accuracy but require an initial search position: if the initial estimate is too far from the true value, the algorithm fails to converge and localization fails.

We designed Image Tracker, a planar object localization system that combines the two approaches: feature-point matching provides coarse localization, after which ESM performs fine localization. Before the fine stage, motion prediction and KLT optical flow estimate the object's position, giving ESM an initial search position closer to the true value. The computationally heavy feature-matching stage runs asynchronously, which keeps the whole system real-time.
