Top Hacker News posts from huggingface.co
- Uncensor any LLM with abliteration (huggingface.co)
- Try Stable Diffusion's Img2Img Mode (huggingface.co)
- LLM in a Flash: Efficient LLM Inference with Limited Memory (huggingface.co)
- Microsoft Phi-2 model changes licence to MIT (huggingface.co)
- Falcon 180B (huggingface.co)
- OpenLLaMA 13B Released (huggingface.co)
- Hugging Face Releases Agents (huggingface.co)
- PaddleOCR: Lightweight, 80 Language OCR (huggingface.co)
- Space secrets leak disclosure (huggingface.co)
- BigCode Project Releases StarCoder: A 15B Code LLM (huggingface.co)
- AnimeGANv2: Convert Face Portraits into Anime (huggingface.co)
- We raised $100M for open and collaborative machine learning (huggingface.co)
- SantaCoder: A new 1.1B code model for generation and infilling (huggingface.co)
- Llama 3 8B is almost as good as Wizard 2 8x22B (huggingface.co)
- StackLLaMA: A hands-on guide to train LLaMA with RLHF (huggingface.co)
- Explaining the SDXL Latent Space (huggingface.co)
- Hugging Face and Google partner for AI collaboration (huggingface.co)
- Mistral-8x7B-Chat (huggingface.co)
- FineWeb: Decanting the web for the finest text data at scale (huggingface.co)
- The age of machine learning as code has arrived (huggingface.co)
- Yi-34B-Chat (huggingface.co)
- GPT-3.5 and Wolfram Alpha via LangChain (huggingface.co)
- The Falcon has landed in the Hugging Face ecosystem (huggingface.co)
- HuggingChat: Chat with Open Source Models (huggingface.co)
- Hugging Face and AWS partner to make AI more accessible (huggingface.co)
- HuggingFace Training Cluster as a Service (huggingface.co)
- More than 80 AI models from Qualcomm (huggingface.co)
- Segmind Stable Diffusion – A smaller version of Stable Diffusion XL (huggingface.co)
- LLaMA-Pro-8B (huggingface.co)
- HuggingChat (huggingface.co)
- Yarn-Mistral-7B-128k (huggingface.co)
- Apple/OpenELM: Efficient Open-Source Family of Language Models (huggingface.co)
- Sparse LLM Inference on CPU: 75% fewer parameters (huggingface.co)
- Pokemon GAN (huggingface.co)
- Switch Transformers C – 2048 experts (1.6T params for 3.1 TB) (2022) (huggingface.co)
- Show HN: Simply Reading Analog Gauges – GPT4, CogVLM Can't (huggingface.co)
- Multimodal Neurons in Pretrained Text-Only Transformers (huggingface.co)
- HuggingChat – ChatGPT alternative with open source models (huggingface.co)
- Find images from movies based on what you draw (huggingface.co)
- OpenLLaMA 7B Training Completed to 1T Tokens (huggingface.co)
- MSFT's WizardLM2 models have been taken down (huggingface.co)
- Phi-2 (huggingface.co)
- Dolphin-2_6-Phi-2 (huggingface.co)
- Alibaba releases 72B LLM with 32k context length (huggingface.co)
- LiteLlama-460M-1T has 460M parameters trained with 1T tokens (huggingface.co)
- Large Language Models: A New Moore's Law? (huggingface.co)
- LLaMA 3 70B Llamafiles (huggingface.co)