5,670 training samples from MITRE ATT&CK, NVD/CVE, CISA KEV, Atomic Red Team, GitHub Security Advisories, pentest methodology, and SOC playbooks. Run on free GPU infrastructure in under 60 minutes.
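Before training, it is worth sanity-checking the generated JSONL file. A minimal sketch, assuming the pipeline emits one JSON object per line in a standard instruction-tuning shape (the field names `instruction`/`input`/`output` and the sample records are illustrative assumptions, not confirmed by the pipeline):

```python
import json
from pathlib import Path

# Hypothetical records in the assumed instruction-tuning shape.
SAMPLE = [
    {"instruction": "Map this behavior to MITRE ATT&CK.",
     "input": "rundll32.exe spawning powershell.exe",
     "output": "T1218.011 (System Binary Proxy Execution: Rundll32)"},
    {"instruction": "Summarize the CISA KEV entry.",
     "input": "CVE-2023-4863",
     "output": "Heap buffer overflow in libwebp; known exploited."},
]

def validate_jsonl(path: Path, required=("instruction", "output")) -> int:
    """Return the number of valid records; raise on a malformed line."""
    count = 0
    with path.open() as fh:
        for lineno, line in enumerate(fh, 1):
            record = json.loads(line)  # each line must be one JSON object
            missing = [k for k in required if k not in record]
            if missing:
                raise ValueError(f"line {lineno}: missing fields {missing}")
            count += 1
    return count

path = Path("sample.jsonl")
path.write_text("".join(json.dumps(r) + "\n" for r in SAMPLE))
print(validate_jsonl(path))  # → 2
```

Running the same check against `data/hancock_v3.jsonl` after the pipeline finishes should report 5,670 records.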
Modal: Best option. CI/CD integrated — trigger training from GitHub with one click. Model persists in Modal volumes between runs.
Sign up free →
Kaggle: 30 hours per week free — more reliable quota than Colab. Upload the Kaggle notebook and run.
Open Kaggle →
Google Colab: Easiest to start. Open the notebook in Colab, select the T4 GPU runtime, and run all cells.
Open in Colab (v3) →
NVIDIA API: Already powering Hancock. Free 1000 requests/day — no fine-tuning needed to start. Use for production inference.
Get API Key →
Google Cloud: Vertex AI training with T4/A100 GPUs and GCS model storage. $300 free credit for new accounts — enough for multiple training runs.
Start free →
Hugging Face: Primary model repository for trained Hancock weights and adapters.
cyberviser/hancock-v3
View on HF →
Cloud storage bucket for Vertex AI training artifacts and model checkpoints.
gs://cyberviser-models/
GCS Console →
Persistent training output stored across Modal runs. LoRA adapters and GGUF exports.
modal volume get hancock-models
Modal Dashboard →
Sign up at modal.com — free $30/month credit (~32 hours A10G GPU or ~50 hours T4)
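The quoted GPU hours imply rough per-hour rates; a quick back-of-envelope check of how the $30 credit divides (rates derived from the figures above, not official Modal pricing):

```python
credit = 30.0                # free monthly credit in USD
a10g_hours, t4_hours = 32, 50

print(round(credit / a10g_hours, 2))  # implied A10G rate per hour → 0.94
print(round(credit / t4_hours, 2))    # implied T4 rate per hour   → 0.6
```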
Install Modal CLI and authenticate:
pip install modal && modal token new
Create the secrets store with your API keys:
modal secret create cyberviser-secrets HF_TOKEN=hf_xxx NVIDIA_API_KEY=nvapi-xxx
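Modal exposes the key/value pairs of an attached secret as environment variables inside the function's container. A minimal sketch of how training code could read them, with the secret values simulated locally via placeholders (the helper name `load_credentials` is hypothetical, not part of the repo):

```python
import os

# Simulate the environment a Modal function sees when it attaches the
# "cyberviser-secrets" secret (values here are placeholders, not real keys).
os.environ.setdefault("HF_TOKEN", "hf_xxx")
os.environ.setdefault("NVIDIA_API_KEY", "nvapi-xxx")

def load_credentials() -> dict:
    """Read the keys the training job needs, failing fast if any is absent."""
    creds = {}
    for key in ("HF_TOKEN", "NVIDIA_API_KEY"):
        value = os.environ.get(key)
        if not value:
            raise RuntimeError(f"{key} not set; was the secret attached?")
        creds[key] = value
    return creds

print(sorted(load_credentials()))  # → ['HF_TOKEN', 'NVIDIA_API_KEY']
```

Failing fast at startup is cheaper than discovering a missing token after the GPU has already spun up.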
Run fine-tuning from the Hancock repo:
modal run train_modal.py
Or trigger from GitHub: Actions → Fine-Tune Hancock → Run workflow
Download the trained model:
modal volume get hancock-models hancock_lora ./hancock_lora
modal volume get hancock-models hancock_gguf ./hancock_gguf
Add GitHub Actions secrets for automated CI training:
MODAL_TOKEN_ID and MODAL_TOKEN_SECRET from modal token new
# 1. Clone and generate training data (no GPU needed)
git clone https://github.com/cyberviser/Hancock.git
cd Hancock
pip install -r requirements.txt
python hancock_pipeline.py --phase 3
# → data/hancock_v3.jsonl (5,670 samples)

# 2a. Train on Modal (recommended)
pip install modal && modal token new
modal run train_modal.py --push-hub
# → uploads to huggingface.co/cyberviser/hancock-mistral-7b-lora

# 2b. Train on Kaggle — upload Hancock_Kaggle_Finetune.ipynb
# 2c. Train on Colab — open Hancock_Colab_Finetune_v3.ipynb

# 3. Run Hancock with fine-tuned model
NVIDIA_API_KEY=nvapi-xxx python hancock_agent.py --server

# Or with your own weights via llama.cpp / Ollama:
ollama create hancock -f Modelfile
ollama serve && python hancock_agent.py --server --model hancock
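The `ollama create` step above expects a Modelfile next to the exported weights. A hedged sketch of what it might contain — the GGUF filename, parameter values, and system prompt are all assumptions; only the `FROM`/`PARAMETER`/`SYSTEM` directives are standard Ollama Modelfile syntax:

```
# Hypothetical Modelfile; adjust the GGUF path to match your actual export.
FROM ./hancock_gguf/hancock-v3.Q4_K_M.gguf
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
SYSTEM """You are Hancock, a cybersecurity assistant trained on MITRE ATT&CK, CVE, and SOC playbook data."""
```

A low temperature is a reasonable default for a security assistant, where precise technique IDs and CVE numbers matter more than creative variation.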