Sisyphus repository
Last updated: 1 October 2023 | SRPMs: 18631 | Visits: 37571491
ALT Linux repositories

Group :: Sciences/Computer science
RPM: llama.cpp


Current version: 20230728-alt1
Build date: 29 July 2023, 22:04
Size: 1205.07 KB

Home page:   https://github.com/ggerganov/llama.cpp

License: MIT
Summary: Inference of LLaMA model in pure C/C++
Description:

A plain C/C++ implementation of LLaMA model inference with no external
dependencies. Supports AVX, AVX2 and AVX512 on x86 architectures, mixed
F16/F32 precision, and 4-bit, 5-bit and 8-bit integer quantization.
Runs on the CPU. Supported models:

   LLaMA
   LLaMA 2
   Alpaca
   GPT4All
   Chinese LLaMA / Alpaca
   Vigogne (French)
   Vicuna
   Koala
   OpenBuddy (Multilingual)
   Pygmalion 7B / Metharme 7B
   WizardLM
   Baichuan-7B and its derivations (such as baichuan-7b-sft)
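The quantization widths listed above translate directly into smaller weight files. A back-of-the-envelope sketch of that saving (the 7B parameter count is illustrative, and real llama.cpp quantization formats carry extra per-block scale metadata, so actual files are slightly larger):

```python
def quantized_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate on-disk weight size in GB: params * bits / 8 bits-per-byte."""
    return n_params * bits_per_weight / 8 / 1e9

# Rough weight sizes for a hypothetical 7B-parameter model:
for label, bits in [("F16", 16), ("8-bit", 8), ("5-bit", 5), ("4-bit", 4)]:
    print(f"{label}: ~{quantized_size_gb(7e9, bits):.1f} GB")
```

This is why a model that is ~14 GB at F16 fits in ~3.5 GB at 4-bit, at some cost in accuracy.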

NOTE 1: For the data format conversion scripts to work, you will need to
install their Python dependencies first:

 pip3 install -r /usr/share/llama.cpp/requirements.txt

NOTE 2:
 MODELS ARE NOT PROVIDED. You need to download them from their original
 sites and place them in the "./models" directory.

 For example, LLaMA downloaded via public torrent link is 220 GB.

Overall, this is raw and experimental: no warranty, no support.

Current maintainer: Vitaly Chikunov

List of contributors

List of RPMs provided by this SRPM:

    ACL:
       
  project & code: Vladimir Lettiev aka crux © 2004-2005, Andrew Avramenko aka liks © 2007-2008
  current maintainer: Michael Shigorin
  translation maintainer: Fernando Martini aka fmartini © 2009