Apple releases OpenELM, a slightly more accurate LLM

Despite its reputation for secrecy, Apple has released OpenELM, a generative AI model it claims outperforms comparable language models trained on public data sets.

OpenELM is 2.36 percent more accurate than February's OLMo while using half as many pretraining tokens. It may be enough to show that Apple is no longer satisfied to remain the AI industry's wallflower.

Apple frames the release as a bid for transparency: it is publishing not just the model, but its full training and evaluation framework.

In the technical paper, eleven Apple researchers explain that, unlike previous practices of providing only model weights and inference code and pre-training on private datasets, their release includes the complete framework for training and evaluating the language model on publicly available datasets, including training logs, multiple checkpoints, and pre-training configurations.

Contrary to academic custom, the authors' email addresses are not listed. And Apple's stance on openness turns out to be roughly comparable to OpenAI's somewhat-less-than-open one.

The accompanying software is not released under a recognized open source license. The terms are not overly restrictive, but they state that Apple reserves the right to file a patent claim if any derivative work based on OpenELM is deemed to infringe on its rights.

OpenELM relies on a technique called layer-wise scaling to allocate parameters more efficiently within the transformer model. Rather than giving every transformer layer an identical configuration, layers closer to the input get fewer attention heads and narrower feed-forward dimensions, with both gradually widening in layers closer to the output.
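As a rough illustration of how that allocation might work, here is a minimal Python sketch assuming the scaling factors are interpolated linearly across depth. The function name, signature, and default values are illustrative, not Apple's actual configuration:

```python
def layerwise_scaling(num_layers, d_model, head_dim,
                      alpha_min=0.5, alpha_max=1.0,
                      beta_min=0.5, beta_max=4.0):
    """Return a per-layer (num_heads, ffn_dim) list instead of one uniform config.

    alpha scales the attention width, beta scales the feed-forward width;
    both grow linearly from the first layer to the last. The min/max values
    here are placeholders, not the ones used in OpenELM.
    """
    configs = []
    for i in range(num_layers):
        t = i / (num_layers - 1)                         # 0.0 at first layer, 1.0 at last
        alpha = alpha_min + (alpha_max - alpha_min) * t  # attention width scale
        beta = beta_min + (beta_max - beta_min) * t      # FFN width multiplier
        num_heads = max(1, round(alpha * d_model / head_dim))
        ffn_dim = round(beta * d_model)
        configs.append((num_heads, ffn_dim))
    return configs

if __name__ == "__main__":
    # Example: a 12-layer model with d_model=768 and 64-dim heads.
    # Early layers come out narrow and cheap, later layers wide,
    # instead of every layer getting the same head count and FFN width.
    for layer, (heads, ffn) in enumerate(layerwise_scaling(12, 768, 64)):
        print(f"layer {layer:2d}: {heads:2d} heads, ffn_dim={ffn}")
```

The total parameter budget stays roughly comparable to a uniform model of intermediate width; it is simply spent where, per the paper's claim, it buys more accuracy.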
