Taiwan’s Foxconn, the world’s largest contract electronics manufacturer, announced the launch of its first large language model, “FoxBrain,” designed to improve its manufacturing and supply chain management processes. The model was trained on 120 Nvidia H100 GPUs and took approximately four weeks to complete, the company said in a statement on Monday.
Foxconn, renowned for assembling Apple’s iPhones and producing Nvidia’s AI servers, revealed that the model is based on Meta’s Llama 3.1 architecture. It is Taiwan’s first large language model equipped with reasoning capabilities and is specifically optimized for traditional Chinese and Taiwanese language styles.
While Foxconn acknowledged a slight performance gap between FoxBrain and China’s DeepSeek distillation model, it emphasized that the new model’s performance is close to world-class standards. Initially created for internal use, FoxBrain is designed for a variety of functions, including data analysis, decision support, document collaboration, mathematics, reasoning and problem-solving, and code generation.
Looking ahead, Foxconn plans to collaborate with technology partners to broaden the model’s applications. The company also aims to share the open-source model to foster the integration of AI into manufacturing, supply chain management, and intelligent decision-making.
Nvidia supported the development by providing access to its Taiwan-based supercomputer, Taipei-1, and offering technical consulting during the model’s training. Taipei-1, which is the largest supercomputer in Taiwan, is located in the southern city of Kaohsiung.
Foxconn is set to reveal more details about FoxBrain at Nvidia’s GTC developer conference, which will take place in mid-March.