insideAI News is pleased to announce that it is a Media Partner for the upcoming AI Hardware & Edge AI Summit, happening Sept. 9-12, 2024, in San Jose, Calif. Register now using the special insideAI News discount code “Insideai15” HERE. Editor-in-Chief & Resident Data Scientist Daniel D. Gutierrez will be attending in person to keep a finger on the pulse of this advancing industry, and he will be conducting interviews with some of the movers and shakers from the AI industry ecosystem.
The AI Hardware & Edge AI Summit is the ultimate destination for the entire AI and ML ecosystem, with a collaborative mission to train, deploy, and scale machine learning systems that are fast, affordable, and efficient. Whether it’s forging new partnerships, staying ahead of the ever-changing semiconductor landscape, learning how to build, train, and deploy efficient systems, meeting peers, learning from AI luminaries, or simply gaining exposure to the world of AI infrastructure, you’ll find over 1,200 like-minded people at our event. Take it from the thousands of industry peers who have attended in the past: if you’re in the AI infrastructure and semiconductor worlds, this is one not to miss!
“I was really delighted at the number of leaders, thought-leaders and big thinkers, that I was able to speak with and think through how hardware can continue to drive AI’s progress,” said Andrew Ng, Founder & CEO, Landing AI (speaking about the AI Hardware & Edge AI Summit 2023).
Featured Companies
At the AI Hardware & Edge AI Summit, insideAI News plans to highlight a number of companies from the AI industry vendor ecosystem. Below, you’ll find a short description of each of these compelling companies:
Amazon Web Services
Amazon Web Services (AWS) offers a variety of services to help businesses use artificial intelligence (AI) and machine learning (ML) to build, run, and integrate solutions. AWS services can help with use cases like application development, natural language processing, and AI-generated content.
AMD
Advanced Micro Devices (AMD) is a technology provider that offers a range of AI solutions for a variety of industries, including automotive, healthcare, industrial, and telecommunications. AMD’s AI solutions are designed for workloads such as large-scale model training, real-time inferencing, video analytics, recommendation engines, and immersive gaming experiences.
Ayar Labs
Ayar Labs develops optical I/O solutions to address data bottlenecks in AI systems. Its goal is to maximize the performance and compute efficiency of AI infrastructure while reducing costs, latency, and power consumption. Ayar Labs’ solutions include TeraPHY optical I/O chiplets and the SuperNova remote light source.
Blaize
Blaize is a leader in edge AI computing, offering a transformative hardware and software architecture that advances AI solutions for the automotive, industrial, smart metro, smart retail, and security markets. By advancing energy efficiency, flexibility, and usability, Blaize enables developers to create new classes of products that bring the benefits of AI and machine learning to a broad range of use cases.
The Blaize Graph Streaming Processor (GSP) architecture delivers high performance at low power, making it ideal for AI inferencing workloads in edge-based applications. The GSP architecture represents a fundamental change in how the intense workloads of the future are computed.
Blaize AI Studio software transforms the productivity of the complete AI edge application lifecycle, from idea through development, deployment, and management. The industry’s first code-free, end-to-end open software platform for the entire AI edge application lifecycle, AI Studio breaks down the current application development and MLOps barriers to the adoption of AI edge technology.
BrainChip
BrainChip is a leader in edge AI on-chip processing and learning. The company’s first-to-market convolutional, neuromorphic processor, Akida™, mimics the event-based processing of the human brain in digital technology to classify sensor data at the point of acquisition, processing data with high precision and unparalleled energy efficiency, independent of the CPU or MCU. On-device learning local to the chip, without the need to access the cloud, dramatically reduces latency while improving privacy and data security. By enabling effective edge computing to be universally deployable across real-world applications such as connected cars, consumer electronics, and industrial IoT, BrainChip is proving that on-chip AI is the future for customers’ products, the planet and beyond.
Dell Technologies
Dell Technologies (NYSE:DELL) helps organizations and individuals build their digital future and transform how they work, live and play. The company provides customers with the industry’s broadest and most innovative technology and services portfolio for the AI era. Dell provides technologies that accelerate and simplify AI adoption along with PCs, Dell APEX, Multicloud and Edge solutions that transform ideas into action.
AI and GenAI are transforming business at an unprecedented pace, offering a variety of advantages that can give your business a competitive edge in a rapidly evolving landscape. Dell AI Solutions can help unlock key insights from your data and elevate your productivity, customer experience, and innovation.
D-Matrix
D-Matrix is a Santa Clara-based AI-processor startup that develops chips for generative AI and large language models. Its chips use digital in-memory computing (IMC) to run Transformer-based inference models and address bandwidth and latency constraints. D-Matrix’s goal is to make generative AI more cost-effective and scalable, and to change the trajectory of AI energy consumption, which is expected to exceed that of the human workforce by 2025.
EdgeCortix
EdgeCortix is a fabless semiconductor design company focused on enabling energy-efficient edge intelligence. The company was founded in 2019 with the radical idea of taking a software-first approach while designing an artificial intelligence-specific, runtime-reconfigurable processor from the ground up, using a technique called “hardware & software co-exploration.” Targeting advanced computer vision applications first, and using proprietary hardware and software IP on existing processors such as Field-Programmable Gate Arrays (FPGAs) and custom-designed Application-Specific Integrated Circuits (ASICs), the company is geared toward positively disrupting the rapidly growing AI hardware space across defense, aerospace, smart cities, Industry 4.0, autonomous vehicles, and robotics.
Furiosa
FuriosaAI, founded in 2017, specializes in high-performance data center AI chips targeting the most capable AI models and applications. Its Gen 1 product, WARBOY (Samsung 14nm), targets advanced computer vision applications, has successfully entered volume production, and is now deployed in public clouds and on-prem data centers. Its Gen 2 product, RNGD (TSMC 5nm; pronounced like “Renegade”), equipped with HBM3, is set to launch this year to address the growing demand for more energy-efficient and powerful computing for LLM and multimodal deployment.
Intel
Intel creates world-changing technology that improves the life of every person on the planet. Today the company is applying its reach, scale, and resources to enable its customers to capitalize more fully on the power of digital technology. Inspired by Moore’s Law, Intel continuously works to advance the design and manufacturing of semiconductors to help address its customers’ greatest challenges. Power AI everywhere with Intel®: with proven AI expertise, an unmatched partner ecosystem, and a comprehensive hardware and software portfolio, Intel can help you deliver the AI results you need.
Lemurian Labs
At Lemurian Labs, the mission is to deliver affordable, accessible, and efficient AI computers, because the company believes in a world where AI isn’t a luxury but a tool for everyone. The founding team brings together expertise in AI, compilers, numerical algorithms, and computer architecture to reimagine accelerated computing. This approach makes it possible for organizations of any size to benefit equally from the transformative potential of AI.
Lightning AI
Lightning AI is the all-in-one platform for AI development. Code together. Prototype. Train. Scale. Serve. From your browser – with zero setup. From the creators of PyTorch Lightning.
Nscale
Nscale’s GPU cloud platform is engineered for the demands of AI, offering high-performance compute optimized for training, fine-tuning, and intensive workloads. From its data centers to its software stack, Nscale is vertically integrated in Europe to provide unparalleled performance, efficiency, and sustainability.
Positron
Positron delivers vendor freedom and faster inference for both enterprises and research teams by allowing them to use hardware and software explicitly designed from the ground up for generative and large language models (LLMs). Through lower power usage and drastically lower total cost of ownership (TCO), Positron enables you to run popular open-source LLMs serving multiple users at high token rates and long context lengths. Positron is also designing its own ASIC to expand from inference and fine-tuning to also support training and other parallel compute workloads.
Rebellions
The founding team of Rebellions relocated to Korea from New York and elsewhere in 2020 to revolutionize the AI chip industry. At the heart of the Korean silicon ecosystem, Rebellions has built a cutting-edge AI inference accelerator and full-stack software optimized for generative AI.
Within just three years of its inception, the company has introduced two groundbreaking chips: the finance-market-focused ION, released in 2021, and the datacenter-focused ATOM, taped out in 2023. ATOM has demonstrated superior performance in the MLPerf benchmarks and has been commercialized in a data center through a strategic partnership with KT (Korea Telecom), the biggest IDC company in South Korea.
Currently, Rebellions is developing its next-generation AI chip, REBEL, equipped with HBM3E in collaboration with Samsung Electronics, paving the way for advanced technology in the era of generative AI.
Recogni
Recogni designs and builds multimodal GenAI inference systems for data centers. Recogni’s systems are powered by Pareto, a logarithmic number system that supports AI inferencing at data center scale. Pareto radically simplifies AI compute by turning multiplications into additions, making its chips smaller, faster, and less energy-hungry without compromising accuracy.
With a global footprint in Europe and North America, Recogni is home to industry-leading talent across chip design, AI/ML, systems engineering, networking, software, and business. Its mission is to build the most compute-dense and energy-efficient GenAI inference system, helping data centers maximize the utilization of compute, space, and energy.
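Recogni’s Pareto format itself is proprietary, but the general idea behind a logarithmic number system is easy to sketch: values are stored as log2 magnitudes, so a multiply in the linear domain becomes an add in the log domain. The short Python sketch below is a hypothetical illustration of that general principle only, not Recogni’s implementation (the function names and the simplifications, such as ignoring sign and zero handling, are our own).

```python
# Minimal sketch of a logarithmic number system (LNS): not Recogni's Pareto
# format, just the general principle that multiplication becomes addition
# once values are stored as log2 magnitudes (sign and zero handling omitted).
import math

def to_log(x: float) -> float:
    """Encode a positive value as its log2 magnitude."""
    return math.log2(x)

def from_log(lx: float) -> float:
    """Decode a log2-encoded value back to the linear domain."""
    return 2.0 ** lx

def log_mul(lx: float, ly: float) -> float:
    """Multiply two LNS-encoded values: just an addition in the log domain."""
    return lx + ly

if __name__ == "__main__":
    a, b = 3.5, 0.25
    product = from_log(log_mul(to_log(a), to_log(b)))
    print(product)                       # 0.875
    print(math.isclose(product, a * b))  # True
```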
SambaNova
SambaNova is a computing startup focused on building the industry’s most advanced systems platform to run AI applications from the datacenter to the edge.
SiMa
SiMa.ai is the software-centric, embedded edge machine learning system-on-chip (MLSoC) company. SiMa.ai delivers one platform for all edge AI that flexibly adjusts to any framework, network, model, sensor, or modality. Edge ML applications that run completely on the SiMa.ai MLSoC see a tenfold increase in performance and energy efficiency, bringing higher-fidelity intelligence to ML use cases spanning computer vision to generative AI in minutes. With SiMa.ai, customers unlock new paths to revenue and significant cost savings to innovate at the edge across industrial manufacturing, retail, aerospace, defense, agriculture, and healthcare.
Hitachi Ventures
Hitachi Ventures has been exploring agentic AI infrastructure and applications, even experimenting with building its own task management and summarization agents. When it began this exploration last year, the market was largely dominated by “ChatGPT” wrappers and open-source frameworks. Since then, however, the firm has seen more founders and operators assembling components to build fully automated workflows capable of handling unstructured data and dynamic contexts.
Sign up for the free insideAI News newsletter.
Join us on Twitter: https://twitter.com/InsideBigData1
Join us on LinkedIn: https://www.linkedin.com/company/insideainews/
Join us on Facebook: https://www.facebook.com/insideAINEWSNOW