hellork committed
Commit 74d8c5c
1 Parent(s): 3a2f2a7

Update README.md

Files changed (1)
  1. README.md +2 -4
README.md CHANGED
@@ -67,16 +67,14 @@ GGML_CUDA=1 make -j # assuming CUDA is available. see docs
  ln -s server ~/.local/bin/whisper_cpp_server # (just put it somewhere in $PATH)
  whisper_cpp_server -l en -m models/ggml-tiny.en.bin --port 7777

- # -ngl option assumes CUDA is available. see docs
+ # -ngl option assumes CUDA or other AI acceleration is available. see docs
  llama-server --hf-repo hellork/calme-2.1-qwen2-7b-IQ4_NL-GGUF --hf-file calme-2.1-qwen2-7b-iq4_nl-imat.gguf -c 2048 -ngl 17 --port 8888

  cd whisper_dictation
  ./whisper_cpp_client.py
  ```

- See [the docs](https://github.com/themanyone/whisper_dictation) for tips on enabling the computer to talk back, draw AI images, carry out voice commands, and other features.
-
- Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
+ ### Install llama.cpp via git:

  Step 1: Clone llama.cpp from GitHub.
  ```
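The hunk cuts off right after "Step 1: Clone llama.cpp from GitHub.", so the commands introduced by the new "### Install llama.cpp via git:" heading are not visible in this diff. Below is a minimal sketch of that step, assuming the same `GGML_CUDA=1 make -j` build shown in the hunk header; the exact commands in the README may differ.

```
# Step 1 (sketch): clone and build llama.cpp.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
GGML_CUDA=1 make -j   # assumes CUDA; plain `make -j` gives a CPU-only build

# Then start the model server exactly as in the README hunk above
# (drop -ngl 17 if no GPU/accelerator is available, per the updated comment).
./llama-server --hf-repo hellork/calme-2.1-qwen2-7b-IQ4_NL-GGUF \
    --hf-file calme-2.1-qwen2-7b-iq4_nl-imat.gguf -c 2048 -ngl 17 --port 8888
```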