# Run TAIDE RAG in VM

## Prerequisites
- Docker and Docker Compose
- Ollama (for creating the taide-local model)
- The TAIDE `.gguf` file, downloaded from https://huggingface.co/taide/TAIDE-LX-7B-Chat-4bit/tree/main?show_file_info=taide-7b-a.2-q4_k_m.gguf (update the file path in `Modelfile` to match where you saved it)

## Setting up taide-local

1. Install Ollama on your VM:

   `curl -fsSL https://ollama.com/install.sh | sh`

2. Create the taide-local model:

   `ollama create taide-local -f Modelfile`
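
The `Modelfile` passed to `ollama create` must point at the downloaded weights. A minimal sketch, assuming the `.gguf` file sits in the same directory as the `Modelfile` (the path is an assumption — edit it to match where you saved the file on your VM):

```
# Point FROM at the downloaded TAIDE weights.
# The relative path below is an assumption — adjust to your actual file location.
FROM ./taide-7b-a.2-q4_k_m.gguf
```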

## Running the Application

1. Clone this repository:

   `git clone -b public https://github.com/yourusername/your-repo.git`

2. Create a `.env` file in the project root with your API keys.

3. Run `docker-compose up --build`.

4. The application will be available at `http://localhost:8000`.
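
The `.env` file from step 2 uses standard `KEY=value` lines, which `docker-compose` loads automatically from the project root. The variable names below are purely illustrative — use whatever names this project's code actually reads:

```
# Hypothetical example only — replace with the variable names this application expects
SOME_API_KEY=your-key-here
ANOTHER_API_KEY=your-other-key-here
```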