
LLaMA

Open and efficient foundation language models from Meta

Commit Details

Message
"Initial commit"
Author
Meta AI
Date
2023-02-24
Hash
b1b1b1b1b1b1b1b1b1b1b1b1b1b1b1b1b1b1b1b1

Fun Fact

LLaMA was designed to be more efficient than GPT-3: the 13B-parameter model outperforms GPT-3 (175B parameters) on most benchmarks, despite being more than ten times smaller.

First Code

Python
# LLaMA - Large Language Model Meta AI
# Copyright (c) Meta Platforms, Inc. and affiliates.

import torch
from transformers import LlamaForCausalLM, LlamaTokenizer

def generate(prompt: str, model, tokenizer, max_length: int = 100):
    """Generate text from a prompt using LLaMA."""
    inputs = tokenizer(prompt, return_tensors="pt")

    # The magic: open-source large language model
    # Democratizing AI research
    with torch.no_grad():
        outputs = model.generate(
            inputs.input_ids,
            attention_mask=inputs.attention_mask,
            max_length=max_length,
            do_sample=True,  # required for temperature/top_p to take effect
            temperature=0.7,
            top_p=0.9,
        )

    return tokenizer.decode(outputs[0], skip_special_tokens=True)
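
The `temperature` and `top_p` arguments control how the next token is sampled: temperature rescales the logits, and top-p (nucleus) sampling restricts sampling to the smallest set of tokens whose cumulative probability exceeds `top_p`. As a rough illustration of the idea (a plain-Python sketch, not the actual `transformers` implementation), `top_p_filter` below is a hypothetical helper:

```python
import math

def top_p_filter(logits, temperature=0.7, top_p=0.9):
    """Illustrative nucleus-sampling filter: scale logits by temperature,
    softmax them, then keep the smallest set of token indices whose
    cumulative probability reaches top_p."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Walk tokens in order of decreasing probability
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [], 0.0
    for i in order:
        kept.append(i)
        cum += probs[i]
        if cum >= top_p:
            break
    return kept  # token indices eligible for sampling

print(top_p_filter([2.0, 1.0, 0.1, -1.0]))  # → [0, 1]
```

With these logits, the two most likely tokens already cover more than 90% of the probability mass, so the two low-probability tokens are excluded; lowering the temperature concentrates mass on the top token and shrinks the nucleus further.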