Is there a generic way to reset a PCI device in Linux from the command line? That is, cause the PCI bus to issue a reset command.
If you've never seen this before, I think it's transformative for how you read C/C++ declarations; it cleared up a lot of confusion for me when I was learning.
If a PCI device is completely non-responsive, it's possible to completely remove the device and then re-scan it, hopefully re-initializing the device so it works again.
echo 1 | sudo tee /sys/bus/pci/devices/<pci-id-of-device>/remove
and then
echo 1 | sudo tee /sys/bus/pci/rescan
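For example, for a hypothetical device at 0000:01:00.0 (find the full domain-qualified ID of your device with lspci -D):

echo 1 | sudo tee /sys/bus/pci/devices/0000:01:00.0/remove

echo 1 | sudo tee /sys/bus/pci/rescan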
I feel the same way you do. None of the other instances are as appealing to me as the great SDF is.
bduggan/raku-jupyter-kernel allows you to run Raku (né Perl 6) within a Jupyter Notebook environment. In terms of onboarding, this seems to be one of the easiest ways to start using Raku.
Raku Kernel for Jupyter notebooks. - bduggan/raku-jupyter-kernel
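If I recall the README correctly, installation is a one-liner through Raku's zef package manager (assuming you already have Jupyter itself installed):

zef install Jupyter::Kernel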
Optimizing Deep Learning Models For Raspberry Pi. Custom CNN (on MNIST data): from 114ms down to 3.75ms. ResNet50 (on "flowers" data): from 1.1s to 1.0s (best case) or 1.6s (worst case).
I'm a little unsure whether I interpreted the results correctly. It seems like workloads TF Lite natively supports (apparently including their custom CNN trained on MNIST) get really fast, while other models are more hit-or-miss.
TinyNeuralNetwork is a library to compress machine learning models through pruning, quantization, and more. Can also convert PyTorch models to TF Lite models.
TinyNeuralNetwork is an efficient and easy-to-use deep learning model compression framework. - alibaba/TinyNeuralNetwork
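As a rough sketch of the PyTorch-to-TF-Lite path, assuming the TFLiteConverter API as shown in the project's README (I haven't verified the exact signature):

```python
# Sketch: convert a PyTorch model to TF Lite with TinyNeuralNetwork.
# MobileNetV2 is just a stand-in; any traceable PyTorch model should work.
import torch
import torchvision

from tinynn.converter import TFLiteConverter

model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

# A dummy input fixes the tensor shapes for tracing/conversion.
dummy_input = torch.rand(1, 3, 224, 224)

converter = TFLiteConverter(model, dummy_input, tflite_path="mobilenet_v2.tflite")
converter.convert()
```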
Overview of machine learning frameworks that are supported on Raspberry Pi: OpenCV, TF Lite, Tencent ncnn, Tencent TNN, Alibaba MNN, Paddle Lite, ARMnn, MXNet + Gluon, PyTorch, and Caffe.
Deep learning software for Raspberry Pi and alternatives
Arm NN is an optimized library of tensor operators for machine learning models to use. Support for TF Lite / ONNX models and Raspberry Pi 4 / armv7.
Arm NN ML Software. The code here is a read-only mirror of https://review.mlplatform.org/admin/repos/ml/armnn - ARM-software/armnn
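A minimal inference sketch using the pyarmnn Python bindings, with API names taken from my memory of Arm's quick-start docs (treat them as approximations, not gospel):

```python
# Sketch: run a TF Lite model through Arm NN's Python bindings (pyarmnn).
import numpy as np
import pyarmnn as ann

parser = ann.ITfLiteParser()
network = parser.CreateNetworkFromBinaryFile("model.tflite")

runtime = ann.IRuntime(ann.CreationOptions())

# Prefer the NEON-accelerated CPU backend, fall back to the reference one.
backends = [ann.BackendId("CpuAcc"), ann.BackendId("CpuRef")]
opt_network, _ = ann.Optimize(network, backends, runtime.GetDeviceSpec(),
                              ann.OptimizerOptions())
net_id, _ = runtime.LoadNetwork(opt_network)

graph_id = 0
input_name = parser.GetSubgraphInputTensorNames(graph_id)[0]
input_binding = parser.GetNetworkInputBindingInfo(graph_id, input_name)
output_name = parser.GetSubgraphOutputTensorNames(graph_id)[0]
output_binding = parser.GetNetworkOutputBindingInfo(graph_id, output_name)

# Zeroed input just to exercise the pipeline; real code feeds image data.
input_data = np.zeros(input_binding[1].GetShape(), dtype=np.float32)
input_tensors = ann.make_input_tensors([input_binding], [input_data])
output_tensors = ann.make_output_tensors([output_binding])

runtime.EnqueueWorkload(net_id, input_tensors, output_tensors)
results = ann.workload_tensors_to_ndarray(output_tensors)
```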
TextSynth is a hosted service for generating text completions using language models. Free and paid tiers. Could be useful for playing with LLMs without a powerful computer (pricing discussion in the body text).
I have linked the pricing page because I think that's the most important aspect to a service like this.
The price isn't too expensive, but it isn't particularly cheap either.
Compared to OpenAI's models, generating 1 million tokens (i.e., roughly the King James Bible), you're looking at:

- OpenAI's `gpt-3.5-turbo` ("ChatGPT-3.5") is $2 / 1M tokens
- TextSynth's `M2M100 1.2B` (cheapest) is $3 / 1M tokens
- OpenAI's `gpt-4` ("ChatGPT-4") is $4 / 1M tokens
- TextSynth's `GPT-NeoX 20B` (most expensive) is $35 / 1M tokens
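For reference, a completion request looks something like this; the endpoint shape follows TextSynth's documented REST API, but the key and engine name here are placeholders:

```python
# Sketch: a TextSynth completion call via the v1 engines endpoint.
import requests

API_KEY = "YOUR_TEXTSYNTH_API_KEY"  # placeholder; from your account page
ENGINE = "gptneox_20B"              # the most expensive engine listed above

resp = requests.post(
    f"https://api.textsynth.com/v1/engines/{ENGINE}/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"prompt": "In the beginning", "max_tokens": 100},
)
print(resp.json()["text"])
```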
LaMini-LM is a collection of small language models that can run on local hardware without many resources. Models range from 250MB to 6.3GB.
LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions - mbzuai-nlp/LaMini-LM
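They're ordinary Hugging Face checkpoints, so running one locally should look roughly like this (the checkpoint name is one of the smaller models in the collection; I haven't run it myself):

```python
# Sketch: run one of the smaller LaMini models via Hugging Face transformers.
from transformers import pipeline

pipe = pipeline("text2text-generation", model="MBZUAI/LaMini-Flan-T5-248M")
print(pipe("What are the steps to make tea?", max_length=256)[0]["generated_text"])
```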
jncraton/languagemodels is a simple Python library for running LLMs locally. Supports instruction and embedding use cases. Chooses models according to available RAM.
Explore large language models in 512MB of RAM. - jncraton/languagemodels
More information on the LocalLLaMA subreddit from the author
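The API is about as small as it gets; if I'm reading the README right, basic usage looks like:

```python
# Sketch of jncraton/languagemodels usage, per its README (unverified).
import languagemodels as lm

print(lm.do("What color is the sky?"))           # instruction-style query
print(lm.complete("She hid in her room until"))  # raw text completion
```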
An Altoids-tin watercolor kit, using Sculpey modeling clay to create a custom tray for the paints.
Pocket-sized Watercolor Altoids Tin: Now that I have made this little kit I can't stop using it! You will need: a regular Altoids tin, an Altoids Smalls tin, Sculpey clay (color of your choice), watercolor tube paints, …
Taming AI Bots: prevent LLMs from entering "bad" states by continuously asking the LLM itself for guidance ("is this good? bad?") and steering away from the bad states.
"Prompt Gisting:" Train two models such that given inputs "Translate French
<G1>
<G2>
" and "<G1>
G2>The cat," then G1 and G2 represent the entire instruction.Abstract: "Prompting is now the primary way to utilize the multitask capabilities of language models (LMs), but prompts occupy valuable space in the input context window, and re-encoding the same prompt is computationally inefficient. Finetuning and distillation methods allow for specialization of LMs without prompting, but require retraining the model for each task. To avoid this trade-off entirely, we present gisting, which trains an LM to compress prompts into smaller sets of "gist" tokens which can be reused for compute efficiency. Gist models can be easily trained as part of instruction finetuning via a restricted attention mask that encourages prompt compression. On decoder (LLaMA-7B) and encoder-decoder (FLAN-T5-XXL) LMs, gisting enables up to 26x compression of prompts, resulting in up to 40% FLOPs reductions, 4.2% wall time speedups, storage savings, and minimal loss in output quality. "
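To make the "restricted attention mask" idea concrete, here's a hypothetical sketch of the masking pattern. This is my reading of the abstract, not the paper's actual code:

```python
# Sketch: a causal attention mask where tokens after the gist span cannot
# attend to the prompt before it, forcing the prompt's information to flow
# through the gist tokens.
# Layout: [prompt tokens][gist tokens][input/completion tokens]
import torch

def gist_attention_mask(seq_len: int, gist_start: int, gist_end: int) -> torch.Tensor:
    """Boolean mask; True means attention is allowed."""
    mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))  # causal
    # Queries at or after the end of the gist span can't see keys before it.
    mask[gist_end:, :gist_start] = False
    return mask

# Example: 4 prompt tokens ("Translate French ..."), 2 gists (<G1> <G2>),
# 3 input tokens ("The cat ...").
print(gist_attention_mask(9, gist_start=4, gist_end=6).int())
```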
An LLM prompt that acts as a special kind of summarizer, compressing an idea into as short a text (a "tweet") as possible. Includes a decompressor.
The prompt: "compress the following text in a way that fits in a tweet (ideally) and such that you (GPT-4) can reconstruct the intention of the human who wrote text as close as possible to the original intention. This is for yourself. It does not need to be human readable or understandable. Abuse of language mixing, abbreviations, symbols (unicode and emoji), or any other encodings or internal representations is all permissible, as long as it, if pasted in a new inference cycle, will yield near-identical results as the original text:"