Pinned

  1. LLaMA in 60 Lines

    import json
    import pickle
    import struct
    import zipfile
    import numpy as np
  2. RLAIF

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained('NousResearch/Nous-Hermes-Llama2-13b', device_map='auto')
    tokenizer = AutoTokenizer.from_pretrained('NousResearch/Nous-Hermes-Llama2-13b')
  3. Advanced_Keras_Tutorial Public

    Jupyter Notebook

  4. deep-learning-keras-euroscipy2016 Public

    Keras@EuroScipy2016

    Jupyter Notebook

  5. HungaBunga Public

    HungaBunga: Brute-Force all sklearn models with all parameters using .fit .predict!

    Python
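    The idea behind HungaBunga can be sketched in a few lines: because every scikit-learn estimator shares the same .fit/.predict/.score interface, you can loop over a list of candidate models and keep the best one. This is a minimal illustration, not HungaBunga's actual code; the choice of dataset and the three candidate models here are assumptions, and the real tool also sweeps each model's hyperparameters.

    ```python
    # Minimal sketch of brute-forcing sklearn models via the shared .fit/.score API.
    # Dataset and model list are illustrative choices, not HungaBunga internals.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LogisticRegression
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Candidate estimators; the real tool would also enumerate their parameters.
    models = [LogisticRegression(max_iter=1000),
              DecisionTreeClassifier(random_state=0),
              KNeighborsClassifier()]

    best_model, best_score = None, -1.0
    for model in models:
        model.fit(X_train, y_train)          # identical interface for every estimator
        score = model.score(X_test, y_test)  # mean accuracy on held-out data
        if score > best_score:
            best_model, best_score = model, score

    print(type(best_model).__name__, best_score)
    ```

    Because the loop only relies on the estimator interface, the same pattern extends unchanged to any list of sklearn models or parameter grids.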

  6. llama Public

    User-friendly LLaMA: Train or Run the model using PyTorch. Nothing else.

    Python