Daniel Hládek 2025-08-19 10:36:18 +02:00
parent c9dea26fd2
commit c3445e1106


@ -30,6 +30,25 @@ Tasks:
- Prepare a final report with analysis, experiments and conclusions.
- Publish the fine-tuned models on the HF Hub. Publish the paper from the project.
Meeting 19.8.
State:
- Fine-tuned the Slovak Mistral 7B model.
- Tried Llama3 7B - the results look OK, but Mistral is better.
- Tried gpt-oss, but it does not work because of dependency issues.
- Working on a preliminary version of the final report.
- The ROUGE score is not a good metric for abstractive summarization, since a valid abstractive summary can share few n-grams with the reference (see the scoring sketch below).
- So far, the most reliable way to evaluate has been manual inspection of the generated summaries.
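
A minimal sketch of the ROUGE comparison mentioned above, assuming the Hugging Face `evaluate` implementation; the library choice and the example texts are assumptions, as the notes do not say which ROUGE implementation was used.

```python
# Minimal ROUGE check for a Slovak abstractive summary.
# Assumption: the Hugging Face `evaluate` library; the example texts are made up.
import evaluate

rouge = evaluate.load("rouge")

# A plausible abstractive summary vs. its reference: same meaning, different wording.
predictions = ["Vláda schválila zákon na podporu obnoviteľných zdrojov energie."]
references = ["Parlament prijal legislatívu podporujúcu zelenú energiu."]

scores = rouge.compute(predictions=predictions, references=references)
# Low n-gram overlap gives low ROUGE even when the summary is acceptable.
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```

Note also that the default tokenizer in the underlying rouge_score package is English-oriented, which may add further noise for Slovak text.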
Tasks:
- Try to fine-tune other models: 'google/gemma-3-4b-it', 'HPLT/hplt2c_slk_checkpoints', 'Qwen/Qwen3-4B'. Results will be kept in separate branches of the repository.
- Try to evaluate the results automatically using a large LLM as a judge. Read some papers on this approach. Prepare a script using Ollama and gpt-oss-20B (a minimal sketch follows after this list).
- Work on the final report.
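
A minimal LLM-as-judge sketch for the evaluation script task, using the Ollama Python client with gpt-oss-20B. The model tag, prompt wording, and 1-5 scale are assumptions rather than an agreed setup, and it assumes the model has been pulled locally (`ollama pull gpt-oss:20b`).

```python
# Minimal LLM-as-judge sketch using the Ollama Python client and gpt-oss-20B.
# Assumptions: model tag "gpt-oss:20b", the prompt wording, and the 1-5 scale.
import ollama

JUDGE_MODEL = "gpt-oss:20b"

PROMPT = """You are evaluating an abstractive summary of a Slovak news article.
Rate the summary from 1 (very poor) to 5 (excellent) for faithfulness and fluency.
Answer with a single digit only.

Article:
{article}

Summary:
{summary}
"""

def judge(article: str, summary: str) -> int:
    """Ask the judge model for a 1-5 score of the summary."""
    response = ollama.chat(
        model=JUDGE_MODEL,
        messages=[{"role": "user",
                   "content": PROMPT.format(article=article, summary=summary)}],
    )
    reply = response["message"]["content"].strip()
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 0  # 0 marks an unparseable reply

if __name__ == "__main__":
    score = judge("Plné znenie článku ...", "Krátky sumár ...")
    print("judge score:", score)
```

Scores from such a judge should still be spot-checked against manual inspection before being reported.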
Meeting 4.8.
State: