Can you fine-tune on localized steering of an LLM?
  • Sorry, I really don't care to continue talking about the difference between supervised and unsupervised learning. It's a pattern used to describe how you are doing ML. It's not a property of a dataset (you wouldn't call Dataset A "unsupervised"). Read the Wikipedia articles for more details.

  • Can you fine-tune on localized steering of an LLM?
  • No, in that case there's no labelling required. That would be unsupervised learning.

    https://en.wikipedia.org/wiki/Unsupervised_learning

    > Conceptually, unsupervised learning divides into the aspects of data, training, algorithm, and downstream applications. Typically, the dataset is harvested cheaply "in the wild", such as a massive text corpus obtained by web crawling, with only minor filtering (such as Common Crawl). This compares favorably to supervised learning, where the dataset (such as the ImageNet1000) is typically constructed manually, which is much more expensive.

  • Can you fine-tune on localized steering of an LLM?
  • Ground truth labels are just prescriptive labels that we recognize as being true. The main thing that distinguishes unsupervised from supervised is that in unsupervised learning, what is "good" is learned from the unstructured data itself. In supervised learning, what is "good" is learned from some external input, like "good" human-provided examples.

  • Can you fine-tune on localized steering of an LLM?
  • No, it's unsupervised. In pre-training, the text data isn't structured at all. It's books, documents, online sources, all put together.

    Supervised learning uses data with "ground truth" labels.
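
    A toy sketch of that contrast, with made-up data (not a real pipeline): supervised labels come from outside the data, while pre-training targets are derived from the raw text itself.

    ```python
    # Toy data only: illustrates where the training targets come from.

    # Supervised: each input is paired with an externally provided label.
    supervised_data = [
        ("The movie was great", "positive"),   # label written by a human annotator
        ("The movie was awful", "negative"),
    ]

    # Pre-training (unsupervised / self-supervised): raw text only; the
    # targets are derived from the data itself (here: the next token).
    raw_text = "the cat sat on the mat"
    tokens = raw_text.split()
    pretraining_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
    # e.g. (['the'], 'cat'), (['the', 'cat'], 'sat'), ...
    ```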

  • Can you fine-tune on localized steering of an LLM?
  • This pre-training was done by Meta. It's what Llama-3.1-405B is (in contrast to Llama-3.1-405B-Instruct). https://huggingface.co/meta-llama/Llama-3.1-405B

    Training Data

    Overview: Llama 3.1 was pretrained on ~15 trillion tokens of data from publicly available sources. The fine-tuning data includes publicly available instruction datasets, as well as over 25M synthetically generated examples.

  • Can you fine-tune on localized steering of an LLM?
  • The article you linked to uses SFT (supervised fine-tuning, a specific training technique) as its alignment strategy. There are other ways to fine-tune a model.

    I guess I'm wondering if you can train on these partial responses without needing the rest of the output or the stop token, or if you need full examples, as the article hints.
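
    In principle this is possible with a plain causal-LM training loop: mask the loss so only the steered span contributes, and simply don't append a stop token. A minimal sketch, assuming a HuggingFace causal LM (the model name is a placeholder):

    ```python
    # Minimal sketch: SFT on a partial response via loss masking.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "meta-llama/Llama-3.1-8B"  # placeholder; any causal LM works
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    prefix = "Here are four examples:\n1. High-quality example 1\n"
    steer = "2. High-quality example 2\n"  # the correction; note: no EOS appended

    prefix_ids = tokenizer(prefix, return_tensors="pt").input_ids
    steer_ids = tokenizer(steer, add_special_tokens=False,
                          return_tensors="pt").input_ids

    input_ids = torch.cat([prefix_ids, steer_ids], dim=1)
    labels = input_ids.clone()
    labels[:, : prefix_ids.shape[1]] = -100  # -100 = ignored by the loss

    # Loss is computed only over the steered tokens; nothing after the
    # correction (and no stop token) is needed.
    loss = model(input_ids=input_ids, labels=labels).loss
    loss.backward()  # in practice, inside an optimizer loop
    ```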

  • Can you fine-tune on localized steering of an LLM?
  • Can SFT be used on partial generations? What I mean by a "steer" is a correction to only a portion of the model output, one that doesn't even reach the end.

    For example, a "bad" partial output might be:

    <assistant> Here are four examples:
    1. High-quality example 1
    2. Low-quality example 2
    

    and the "steer" might be:

    <assistant> Here are four examples:
    1. High-quality example 1
    2. High-quality example 2
    

    but the full response will eventually be:

    <assistant> Here are four examples:
    1. High-quality example 1
    2. High-quality example 2
    3. High-quality example 3
    4. High-quality example 4
    

    The corrections don't include the full output.
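
    For the resume step, generation just continues from the corrected prefix; a quick sketch, reusing the `tokenizer` and `model` names from the masking snippet above:

    ```python
    # Sketch: resume generation from the corrected ("steered") prefix.
    steered = ("Here are four examples:\n"
               "1. High-quality example 1\n"
               "2. High-quality example 2\n")
    inputs = tokenizer(steered, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=100)
    print(tokenizer.decode(out[0], skip_special_tokens=True))
    ```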

  • Can you fine-tune on localized steering of an LLM?

    I want to fine-tune an LLM to "steer" it in the right direction. I have plenty of training examples in which I stop the generation early, correct the output to go in the right direction, and then resume generation.

    Basically, for my dataset, doing 100 "steers" on a single task is much cheaper than completely correcting 100 full generations, and I think each of these "steer" operations has value and could be used for training.

    So maybe I'm looking for some kind of localized DPO. Does anyone know if something like this exists?
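
    If a preference-based recipe fits better, one way to cast each steer as DPO-style data is to make the shared context the prompt, the original continuation the rejected sample, and the correction the chosen one. A sketch (field names follow TRL's DPOTrainer convention; whether standard DPO copes well with non-terminated continuations is an open question):

    ```python
    # Sketch: one "steer" expressed as a preference pair (TRL DPOTrainer field names).
    pair = {
        "prompt": "Here are four examples:\n1. High-quality example 1\n",
        "rejected": "2. Low-quality example 2\n",   # what the model originally wrote
        "chosen": "2. High-quality example 2\n",    # the human correction
    }
    # A dataset of such pairs never requires writing out the full response;
    # alternatively, the masked-SFT approach sketched earlier trains on only
    # the corrected span.
    ```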

    Llama 3.3 70b - End of open-weight pretrained models from Meta or just a better Llama 3.1 405b finetune?
  • Thank you so much, that answers my question exactly with an official response (that guy works at Meta) confirming it's the same base model!

    I was concerned primarily because the release notes strangely didn't mention it anywhere, and I thought it would have been important enough to mention.

  • Llama 3.3 70b - End of open-weight pretrained models from Meta or just a better Llama 3.1 405b finetune?

    People are talking about the new Llama 3.3 70b release, which has generally better performance than Llama 3.1 (approaching 3.1's 405b performance): https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_3

    However, something to note:

    > Llama 3.3 70B is provided only as an instruction-tuned model; a pretrained version is not available.

    Is this the end of open-weight pretrained models from Meta, or is Llama 3.3 70b instruct just a better-instruction-tuned version of a 3.1 pretrained model?

    Comparing the model cards:

    3.1: https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md
    3.3: https://github.com/meta-llama/llama-models/blob/main/models/llama3_3/MODEL_CARD.md

    The same knowledge cutoff, same amount of training data, and same training time give me hope that it's just a better finetune, maybe of Llama 3.1 405b.

    What models can we use for img2img today?

    I'd like to fine tune a model that does img2img with a text prompt to guide the output. I think img2img-turbo might be the closest to what I'm after, though by default it uses a fixed prompt which can be made variable with some tweaking of the training code.

    At the moment I only have access to 24GB of VRAM, which limits my options. What I'm after is training a model to make specific text-based modifications to images, and I have plenty of before-and-after image pairs plus the modification text prompts to train on. Worst case, I can see whether reducing the image size during training makes it feasible with my setup.
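
    For what it's worth, that "before image plus edit instruction gives after image" setup matches InstructPix2Pix. A rough inference sketch with diffusers (model ID and settings are illustrative; fine-tuning would go through the diffusers InstructPix2Pix training example script):

    ```python
    # Sketch: instruction-guided img2img with diffusers (illustrative settings).
    import torch
    from diffusers import StableDiffusionInstructPix2PixPipeline
    from PIL import Image

    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
        "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
    ).to("cuda")
    pipe.enable_attention_slicing()  # helps stay within 24GB VRAM

    image = Image.open("before.png").convert("RGB").resize((512, 512))
    edited = pipe(
        "make the sky overcast",      # the modification text prompt
        image=image,
        num_inference_steps=20,
        image_guidance_scale=1.5,     # how closely to follow the input image
    ).images[0]
    edited.save("after.png")
    ```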

    Are there any other options available today?
