Hacker News posts about MosaicML
MosaicML is a generative artificial intelligence startup best known for MPT-7B, a commercially usable large language model (LLM) comparable in quality to the popular LLaMA model.
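The MPT-7B weights are published openly on Hugging Face, so "commercially usable" here means the model can be pulled straight into the standard `transformers` API. A minimal sketch of loading it, assuming the public `mosaicml/mpt-7b` checkpoint and the GPT-NeoX tokenizer pairing documented on its model card (not an official MosaicML example; memory and dtype handling are omitted):

```python
import transformers

# MPT checkpoints ship custom modeling code, so trust_remote_code=True is required.
model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b",
    trust_remote_code=True,
)
# MPT-7B reuses the EleutherAI GPT-NeoX-20B tokenizer rather than shipping its own.
tokenizer = transformers.AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

inputs = tokenizer("MosaicML is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```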
- MosaicML MPT-7B: A Commercially-Usable LLaMa-Quality Model (www.mosaicml.com)
- Training LLMs with AMD MI250 GPUs and MosaicML (www.mosaicml.com)
- Databricks Signs Definitive Agreement to Acquire MosaicML (www.databricks.com)
- Training Stable Diffusion from Scratch for <$50k with MosaicML (www.mosaicml.com)
- Cloudflare R2 and MosaicML enable training LLMs anywhere (blog.cloudflare.com)
- Databricks acquires OpenAI competitor MosaicML for $1.3B (www.mosaicml.com)
- Databricks picks up MosaicML, an OpenAI competitor, for $1.3B (techcrunch.com)
- MosaicML Agrees to Join Databricks to Power Generative AI for All (www.mosaicml.com)
- MosaicML: MPT-30B-Chat (huggingface.co)
- Llama2-70B with MosaicML Inference (www.mosaicml.com)
- Databricks Agrees to Acquire MosaicML, a Leading Generative AI Platform (www.databricks.com)
- Databricks buys AI darling MosaicML for $1.3B (blocksandfiles.com)
- MosaicML Joins Databricks (twitter.com)
- MosaicML launches new service in bid to challenge OpenAI on price (www.reuters.com)
- Training Stable Diffusion from Scratch Costs <$160k (www.mosaicml.com)
- MPT-30B: Raising the bar for open-source foundation models (www.mosaicml.com)
- Training LLMs with AMD MI250 GPUs (www.mosaicml.com)
- Llama 1.3B Trained on 200B Tokens for Commercial Use (huggingface.co)
- PubMed GPT: A Domain-Specific Large Language Model for Biomedical Text (www.mosaicml.com)
- Training Stable Diffusion from Scratch for <$50k (www.mosaicml.com)
- MosaicBERT: Pretraining BERT from Scratch for $20 (www.mosaicml.com)
- MPT-7B-StoryWriter-65k+: LLM for super long contexts (Apache 2.0) (huggingface.co)
- MPT-30B – Apache 2.0 licensed LLM (huggingface.co)
- MPT-7B-8K: 8K Context Length for Document Understanding (www.mosaicml.com)
- Benchmarking Large Language Models on Nvidia H100 GPUs (www.mosaicml.com)