Posts 13 · Comments 2 · Joined 2 yr. ago
mediocreatbest @lemmy.sdf.org

If a PCI device is completely non-responsive, it's possible to remove the device from the bus and then re-scan, which re-initializes the device and will hopefully bring it back.

echo 1 | sudo tee /sys/bus/pci/devices/<pci-id-of-device>/remove and then echo 1 | sudo tee /sys/bus/pci/rescan (note the devices/ component in the first path; the device ID looks like 0000:01:00.0)
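The same sequence as a Python sketch (must run as root); the device address below is a hypothetical placeholder:

```python
from pathlib import Path

# Hypothetical device address; list yours with `lspci -D`.
dev = "0000:01:00.0"

# Detach the device from the bus, then ask the bus to re-enumerate it.
Path(f"/sys/bus/pci/devices/{dev}/remove").write_text("1")
Path("/sys/bus/pci/rescan").write_text("1")
```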

mediocreatbest @lemmy.sdf.org

bduggan/raku-jupyter-kernel allows you to run Raku (né Perl 6) within a Jupyter Notebook environment. In terms of onboarding, this seems to be one of the easiest ways to start using Raku.

mediocreatbest @lemmy.sdf.org

Optimizing Deep Learning Models For Raspberry Pi. A custom CNN (on MNIST data) goes from 114 ms to 3.75 ms; ResNet50 (on "flowers" data) goes from 1.1 s to between 1.0 s (best case) and 1.6 s (worst case).

I'm a little unsure whether I interpreted the results correctly. It seems like models TF Lite natively supports (apparently, their custom CNN trained on MNIST) get really fast, while other models are a little hit-or-miss.
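For reference, the speedups in the article come from converting models to TF Lite. A minimal sketch of that conversion with default quantization, assuming a trained Keras model saved as mnist_cnn.h5 (the filename is hypothetical):

```python
import tensorflow as tf

# Assumes a trained Keras model saved as mnist_cnn.h5 (hypothetical filename).
model = tf.keras.models.load_model("mnist_cnn.h5")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default quantization
tflite_model = converter.convert()

with open("mnist_cnn.tflite", "wb") as f:
    f.write(tflite_model)
```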

mediocreatbest @lemmy.sdf.org

TinyNeuralNetwork is a library to compress machine learning models through pruning, quantization, and more. Can also convert PyTorch models to TF Lite models.
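A sketch of the PyTorch-to-TF-Lite path, based on TinyNeuralNetwork's documented TFLiteConverter interface; the model choice, input shape, and output path here are arbitrary examples:

```python
import torch
import torchvision
from tinynn.converter import TFLiteConverter

# Model, input shape, and output path are arbitrary examples.
model = torchvision.models.mobilenet_v2(weights="DEFAULT").eval()
dummy_input = torch.randn(1, 3, 224, 224)

converter = TFLiteConverter(model, dummy_input, tflite_path="mobilenet_v2.tflite")
converter.convert()
```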

mediocreatbest @lemmy.sdf.org

Overview of machine learning frameworks that are supported on Raspberry Pi: OpenCV, TF Lite, Tencent ncnn, Tencent TNN, Alibaba MNN, Paddle Lite, ARMnn, MXNet + Gluon, PyTorch, and Caffe.

mediocreatbest @lemmy.sdf.org

Arm NN is a library of optimized tensor operators for machine learning models, with support for TF Lite / ONNX models and Raspberry Pi 4 / armv7.

mediocreatbest @lemmy.sdf.org

TextSynth is a hosted service for generating text completions using language models. Free and paid tiers. Could be useful for playing with LLMs without a powerful computer (pricing discussion in body text).

I have linked the pricing page because I think that's the most important aspect of a service like this.

The price isn't too expensive, but it isn't particularly cheap either. (A minimal request sketch follows the price list below.)

Comparing against OpenAI's ChatGPT models for generating 1 million tokens (roughly the length of the King James Bible), you're looking at:

  • OpenAI's gpt-3.5-turbo ("ChatGPT-3.5") is $2 / 1m tokens
  • TextSynth's M2M100 1.2B (cheapest) is $3 / 1m tokens
  • OpenAI's gpt-4 ("ChatGPT-4") is $4 / 1m tokens
  • TextSynth's GPT-Neox 20B (most expensive) is $35 / 1m tokens
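As a rough sketch of what using the paid API looks like: the endpoint path, engine ID, and response field below follow TextSynth's documented REST API as I remember it, so treat them as assumptions and check the official docs before relying on them.

```python
import requests

# Hedged sketch; endpoint path, engine ID, and response shape are assumptions
# based on TextSynth's documented REST API; verify against the official docs.
API_KEY = "your-api-key"  # from your TextSynth account page

resp = requests.post(
    "https://api.textsynth.com/v1/engines/gptneox_20B/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "Once upon a time", "max_tokens": 50},
)
print(resp.json()["text"])
```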
mediocreatbest @lemmy.sdf.org

LaMini-LM is a collection of small language models that can run on local hardware without many resources. Models range from 250 MB to 6.3 GB.
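A minimal sketch of running one of the smaller checkpoints locally with Hugging Face transformers; the exact model ID is an assumption, so check the LaMini-LM repo for the published list:

```python
from transformers import pipeline

# The model ID is an assumption; see the LaMini-LM repo for the full
# list of published checkpoints and their sizes.
generator = pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-248M")
result = generator("What are the three primary colors?", max_new_tokens=64)
print(result[0]["generated_text"])
```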

mediocreatbest @lemmy.sdf.org

jncraton/languagemodels is a simple Python library for running LLMs locally. Supports instruction and embedding use cases. Chooses models according to available RAM.
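A hedged sketch of the instruction interface as shown in the project's README; the function name is an assumption if the API has changed since this was posted:

```python
import languagemodels as lm  # pip install languagemodels

# lm.do() answers an instruction-style prompt; per the README, the library
# picks a model that fits in available RAM. Treat the function name as an
# assumption if the API has changed upstream.
print(lm.do("What color is the sky?"))
```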

mediocreatbest @lemmy.sdf.org

Altoids tin for watercolor, using Sculpey modeling clay to create a custom tray for the paints.

www.instructables.com: Pocket-sized Watercolor Altoids Tin
mediocreatbest @lemmy.sdf.org

Taming AI Bots: prevent LLMs from entering "bad" states by using continuous feedback from the LLM itself ("is this state good or bad?") to steer away from them.

mediocreatbest @lemmy.sdf.org

"Prompt Gisting:" Train two models such that given inputs "Translate French

<G1>

<G2>

" and "

<G1>

G2>The cat," then G1 and G2 represent the entire instruction.

Abstract: "Prompting is now the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and re-encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs without prompting, but require retraining the model for each task. To avoid this trade-off entirely, we present gisting, which trains an LM to compress prompts into smaller sets of "gist" tokens which can be reused for compute efficiency. Gist models can be easily trained as part of instruction finetuning via a restricted attention mask that encourages prompt compression. On decoder (LLaMA-7B) and encoder-decoder (FLAN-T5-XXL) LMs, gisting enables up to 26x compression of prompts, resulting in up to 40% FLOPs reductions, 4.2% wall time speedups, storage savings, and minimal loss in output quality."
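To make the "restricted attention mask" idea concrete, here's a hypothetical sketch (not the paper's code): positions after the gist tokens keep normal causal attention but are blocked from attending to the raw prompt, so the gist tokens must carry the instruction.

```python
import torch

def gist_attention_mask(seq_len: int, prompt_end: int, gist_end: int) -> torch.Tensor:
    """Hypothetical sketch of a gisting-style mask: tokens after the gist
    span may attend to the gist tokens (and later positions) but not to
    the raw prompt that precedes them."""
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))  # causal base
    mask[gist_end:, :prompt_end] = False  # hide the prompt from post-gist tokens
    return mask

# Example: tokens 0-3 are the prompt, 4-5 are the gist tokens <G1> <G2>,
# and 6+ are the input/completion that must rely on the gist alone.
print(gist_attention_mask(8, prompt_end=4, gist_end=6).int())
```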

mediocreatbest @lemmy.sdf.org

An LLM prompt that acts as a special kind of summarizer, compressing an idea into as short a text ("tweet") as possible. Includes a decompressor.

The prompt: "compress the following text in a way that fits in a tweet (ideally) and such that you (GPT-4) can reconstruct the intention of the human who wrote text as close as possible to the original intention. This is for yourself. It does not need to be human readable or understandable. Abuse of language mixing, abbreviations, symbols (unicode and emoji), or any other encodings or internal representations is all permissible, as long as it, if pasted in a new inference cycle, will yield near-identical results as the original text:"