Sisyphus repository
Last update: 1 October 2023 | SRPMs: 18631 | Visits: 37508662

Group :: Sciences/Computer science
RPM: llama.cpp


Current version: 20230728-alt1
Build date: 29 July 2023, 22:04
Size: 1205.07 KB

Home page:   https://github.com/ggerganov/llama.cpp

License: MIT
Summary: Inference of LLaMA model in pure C/C++
Description:

A plain C/C++ implementation of LLaMA model inference with no
dependencies. Supports AVX, AVX2 and AVX512 on x86 architectures,
mixed F16/F32 precision, and 4-bit, 5-bit and 8-bit integer
quantization. Runs on the CPU. Supported models:

   LLaMA
   LLaMA 2
   Alpaca
   GPT4All
   Chinese LLaMA / Alpaca
   Vigogne (French)
   Vicuna
   Koala
   OpenBuddy (Multilingual)
   Pygmalion 7B / Metharme 7B
   WizardLM
   Baichuan-7B and its derivations (such as baichuan-7b-sft)
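The integer quantization mentioned in the description boils down to storing each block of weights as small integers plus one shared float scale. A toy sketch of symmetric 4-bit block quantization is below; this is illustrative only, not llama.cpp's actual Q4_0 code, which works on 32-value blocks and packs two 4-bit values per byte:

```python
def quantize_block(xs, bits=4):
    """Symmetric block quantization: map floats to ints in
    [-(2**(bits-1)), 2**(bits-1) - 1] with one shared float scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in xs) / qmax or 1.0  # avoid a zero scale
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in xs]
    return q, scale

def dequantize_block(q, scale):
    """Recover approximate floats; error is at most scale / 2 per value."""
    return [v * scale for v in q]

weights = [0.5, -1.0, 0.25, 0.9]
q, scale = quantize_block(weights)
approx = dequantize_block(q, scale)
```

Storing a 4-bit integer per weight plus one scale per block is what shrinks a model to roughly a quarter of its F16 size, at the cost of a bounded rounding error per weight.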

NOTE 1: For the data format conversion scripts to work, you will need
to run:

 pip3 install -r /usr/share/llama.cpp/requirements.txt
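With the requirements installed, a typical conversion-and-quantization workflow looks roughly like the following. Tool names (`convert.py`, `quantize`, `main`) follow upstream llama.cpp of this release; the exact installed paths and the `./models/7B` layout are assumptions and may differ in this package:

```shell
# Convert original (PyTorch) LLaMA weights to ggml F16 format.
# convert.py ships with upstream llama.cpp; the install path is an assumption.
python3 /usr/share/llama.cpp/convert.py ./models/7B/ \
    --outfile ./models/7B/ggml-model-f16.bin

# Quantize the F16 model to 4-bit (q4_0) to cut memory use roughly 4x.
quantize ./models/7B/ggml-model-f16.bin ./models/7B/ggml-model-q4_0.bin q4_0

# Run CPU inference with a short prompt.
main -m ./models/7B/ggml-model-q4_0.bin -p "Hello" -n 64
```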

NOTE 2:
 MODELS ARE NOT PROVIDED. You need to download them from their
 original sources and place them in the "./models" directory.

 For example, LLaMA downloaded via public torrent link is 220 GB.

Overall, this is raw and experimental software: no warranty, no support.

Current maintainer: Vitaly Chikunov


List of rpms provided by this srpm:
