CuspAI Raises $30M Seed Round for AI-Powered Material Search Engine

Have you heard of a search engine that can find materials for you? Probably not, but there is now an initiative to build one. Cambridge-based startup CuspAI is challenging traditional methods of material discovery with an AI-powered material search engine. Investors are considering the new search engine has […]

Smart Factories: Concepts and Features

Exploring how new technologies, including artificial intelligence (AI), revolutionize manufacturing processes.

A smart factory is a cyber-physical system that leverages advanced technologies to analyze data, automate processes, and learn continuously. It’s part of the Industry 4.0 transformation, which combines digitalization and intelligent automation. Here are some key features:

  1. Interconnected Network: Smart factories integrate machines, communication mechanisms, and computing power. They form an interconnected ecosystem where data flows seamlessly.
  2. Advanced Technologies: Smart factories use AI, machine learning, and robotics to optimize operations. These technologies enable real-time decision-making and adaptability.
  3. Data-Driven Insights: Sensors collect data from equipment, production lines, and supply chains. AI processes this data to improve efficiency, quality, and predictive maintenance.

Automation, Robots, and AI on the Factory Floor

1. Production Automation

  • Robotic Arms: Robots handle repetitive tasks like assembly, welding, and material handling. They enhance precision and speed.
  • Collaborative Robots: These work alongside humans, assisting with tasks like packaging, quality control, and logistics.

2. Quality Inspection

3. IoT + AI: Predictive Maintenance and Energy Efficiency

  • Predictive Maintenance (IoT Sensors): Connected sensors monitor equipment health. AI algorithms predict failures, allowing timely maintenance. This minimizes unplanned downtime and reduces costs.
  • Energy Management and Energy Consumption Analysis: AI analyzes vast data sets to optimize energy usage. It helps reduce waste, manage various energy sources, and enhance sustainability.
  • Predictive Energy Demand: AI predicts energy demand patterns, aiding efficient resource allocation.
AI turning IoT Data into Information: predictive maintenance, automated quality inspection, optimized energy consumption, etc.
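As a toy illustration of the predictive-maintenance idea, the sketch below flags sensor readings that deviate sharply from recent behavior. The signal, window size, threshold, and function name are all invented for illustration; real systems use far richer models.

```python
import numpy as np

def flag_anomalies(readings, window=20, z_thresh=3.0):
    """Flag readings that deviate strongly from the recent rolling window."""
    readings = np.asarray(readings, dtype=float)
    flags = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mu, sigma = recent.mean(), recent.std()
        if sigma == 0:
            # A perfectly flat history: any change at all is anomalous
            is_anomaly = readings[i] != mu
        else:
            is_anomaly = abs(readings[i] - mu) / sigma > z_thresh
        flags.append(bool(is_anomaly))
    return flags

# A stable vibration signal with one sudden spike at index 30
signal = [1.0] * 30 + [5.0] + [1.0] * 10
flags = flag_anomalies(signal)  # only the spike is flagged
```

In a real deployment the same logic would run on streaming IoT sensor data and trigger a maintenance work order instead of returning a list.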

AI-Driven Energy Management in Smart Factories

1. Real-Time Energy Optimization

  • IoT Data Integration: Smart factories deploy IoT sensors across their infrastructure to collect real-time data on energy consumption. These sensors monitor machinery, lighting, HVAC systems, and other energy-intensive components.
  • Weather Forecast Integration: By combining IoT data with weather forecasts, AI algorithms predict energy demand variations. For example, when a heatwave is forecast, the factory can pre-cool the facility during off-peak hours to reduce energy costs during peak demand.

2. Dynamic Energy Source Selection

  • Production Schedules and Energy Sources: AI analyzes production schedules, demand patterns, and energy prices, and optimally selects energy sources (e.g., solar, grid, and battery storage) based on cost and availability. For example, during high-demand production hours the factory might rely on grid power, while at night or during low-demand periods it switches to stored energy from batteries or renewable sources.

3. Predictive Maintenance and Energy Efficiency

  • Predictive Maintenance: AI predicts equipment failures, preventing unplanned downtime. Well-maintained machinery operates more efficiently, reducing energy waste.
  • Energy-Efficient Equipment: AI identifies energy-hungry equipment and suggests upgrades or replacements, for instance replacing old motors with energy-efficient ones or installing variable frequency drives (VFDs) to optimize motor speed.

4. Demand Response and Load Shifting

  • Demand Response Programs: AI participates in utility demand response programs. When the grid is stressed, the factory reduces non-essential loads or switches to backup power.
  • Load Shifting: AI shifts energy-intensive processes to off-peak hours, for example running heavy machinery at night when electricity rates are lower, or charging electric forklifts during off-peak hours.
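As a toy illustration of load shifting, the snippet below greedily schedules a shiftable job into the cheapest hours of the day. The hourly prices and function name are invented for illustration; real schedulers also handle process constraints and demand forecasts.

```python
# Illustrative hourly electricity prices (cents/kWh) for one day
prices = [8, 7, 6, 6, 7, 9, 12, 15, 18, 20, 21, 22,
          22, 21, 20, 19, 18, 17, 16, 14, 12, 10, 9, 8]

def cheapest_hours(prices, hours_needed):
    """Greedily pick the hours with the lowest prices for a shiftable load."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

# Schedule a 4-hour energy-intensive job into the cheapest hours
run_hours = cheapest_hours(prices, hours_needed=4)
```

Here the four cheapest hours fall in the early morning, which is exactly the "run heavy machinery at night" behavior described above.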

Benefits and Dollar Savings

AI and IoT empower smart factories to make data-driven decisions, minimize waste, and contribute to a more sustainable future. Dollar savings, environmental benefits, and operational efficiency go hand in hand.

Moreover, implementing automated visual inspection and AI-driven predictive maintenance in factories delivers additional benefits:

Reduced Downtime:

  • Predictive Maintenance: By identifying potential equipment failures before they occur, factories can schedule maintenance during planned downtime. This minimizes unplanned interruptions and keeps production lines running smoothly.

Enhanced Quality Control:

  • Automated Visual Inspection: AI-powered systems detect defects, inconsistencies, or deviations in real-time. This ensures that only high-quality products reach the market.
  • Cost Savings: Fewer defective products mean less waste and rework, leading to cost savings.

Optimized Resource Allocation:

  • Energy Efficiency: AI analyzes energy consumption patterns and suggests adjustments. Factories can allocate resources (such as electricity, water, and raw materials) more efficiently.
  • Resource Cost Reduction: By using resources judiciously, factories reduce expenses.

Improved Safety:

  • Predictive Maintenance: Well-maintained machinery is less likely to malfunction, reducing safety risks for workers.
  • Visual Inspection: Detecting safety hazards (e.g., loose bolts or faulty wiring) prevents accidents.

Streamlined Inventory Management:

  • Predictive Maintenance: AI predicts spare part requirements. Factories maintain optimal inventory levels, avoiding overstocking or stockouts.
  • Cost Savings: Efficient inventory management reduces storage costs and ensures timely replacements.

Better Workforce Utilization:

  • Predictive Maintenance: Workers focus on value-added tasks instead of emergency repairs.
  • Visual Inspection: Skilled workers can focus on complex inspections, while AI handles routine checks.

Reduced Environmental Impact:

  • Energy Efficiency: By optimizing energy usage, factories contribute to sustainability goals and reduce their carbon footprint.
  • Waste Reduction: Fewer defects mean less waste, benefiting the environment.

Smart factories leverage technologies like AI, IoT, and advanced robotics to optimize efficiency, quality, and competitiveness. Benefits include reduced downtime, increased operational efficiency, improved product quality, enhanced worker safety, and greater flexibility in responding to market demands.

For instance, a large company (Praxair/Linde) achieved savings of up to $10 million per year from a 1% increase in operating efficiency, while a 25% reduction in energy usage, operating costs, and downtime yielded about $55 million in annual savings at a J&J manufacturing site.

These technologies not only enhance efficiency but also lead to tangible cost savings, improved safety, and a more sustainable manufacturing ecosystem. Overall, the adoption of smart factory technologies leads to tangible benefits, cost savings, and operational improvements.

Beyond the Factory Floor: AI in Back Office Automation

While AI plays a crucial role on the factory floor, its impact extends beyond production lines; back-office automation is equally vital.

In summary, AI transforms not only the factory floor but also the entire organizational ecosystem.

Let’s meet at the IDC Smart Factory conference in Katowice, Poland, on June 20th, 2024!
Keynote Presentation at IDC Smart Factory conference

Smart Factories: Concepts and Features was originally published in Becoming Human: Artificial Intelligence Magazine on Medium, where people are continuing the conversation by highlighting and responding to this story.

Former Amazon Scientist Reveals Why Alexa Lagged in AI Competition

Amazon’s Alexa was one of the earliest assistants and gained a significant head start over its competitors. However, with the rise of AI it has been left behind and has failed to maintain a leading position. A former senior machine learning scientist at Alexa AI shared his perspective on why it fell behind in the […]

Former Cybereason Executives Unveil Seven AI, Close $36M Seed Round

Cybersecurity is an ever-evolving segment, and the emergence of Seven AI marks a significant milestone. It is set to redefine how we defend against cyber threats. Boston-based Seven AI was founded in late 2023 by Lior Div and Yonatan Striem-Amit, prominent figures who had earlier propelled Cybereason to unicorn status. The […]

3 Important Considerations in DDPG Reinforcement Algorithm

Photo by Jeremy Bishop on Unsplash

Deep Deterministic Policy Gradient (DDPG) is a reinforcement learning algorithm for learning continuous actions. You can learn more about it in the video below on YouTube:

https://youtu.be/4jh32CvwKYw?si=FPX38GVQ-yKESQKU

Here are 3 important considerations you will have to work on while solving a problem with DDPG. Please note that this is not a how-to guide on DDPG but a what-to guide, in the sense that it only covers the areas you will have to look into.

Noise

Ornstein-Uhlenbeck

The original DDPG paper used noise for exploration and suggested that the noise at each step depend on the noise at the previous step. It implemented this temporally correlated noise with the Ornstein-Uhlenbeck process. Later implementations dropped this constraint and simply used independent random noise. Depending on your problem domain, keeping the noise at one step correlated with the previous step may not be appropriate: correlated noise can stay on one side of its mean for long stretches, which may limit exploration. For the problem I am solving with DDPG, simple random noise works just fine.
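To make the two options concrete, here is a minimal NumPy sketch of both noise schemes. The parameter values (theta, sigma, dt) are common illustrative defaults, not recommendations from this article.

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: each sample depends on the previous one."""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2, dt=1e-2):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.x = np.full(size, mu)

    def sample(self):
        # Mean-reverting drift plus a scaled Gaussian increment
        dx = (self.theta * (self.mu - self.x) * self.dt
              + self.sigma * np.sqrt(self.dt) * np.random.randn(*self.x.shape))
        self.x = self.x + dx
        return self.x.copy()

def gaussian_noise(size, sigma=0.2):
    """Uncorrelated alternative: every sample is independent."""
    return sigma * np.random.randn(size)

# Add exploration noise to a 2-dimensional continuous action
ou = OUNoise(size=2)
action = np.array([0.1, -0.3])
noisy = action + ou.sample()         # temporally correlated exploration
noisy2 = action + gaussian_noise(2)  # independent exploration
```

Because each OU sample drifts from the previous one, a sequence of OU samples wanders around the mean rather than jumping across it, which is exactly the behavior discussed above.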

Size of Noise

The size of the noise you use for exploration also matters. If the valid actions for your problem domain lie between -0.01 and 0.01, there is little benefit in using noise with a mean of 0 and a standard deviation of 0.2, since the algorithm will spend much of its exploration in invalid regions.

Noise decay

Many blogs talk about decaying the noise slowly during training, while many others do not and keep the noise undecayed throughout. I think a well-trained algorithm will work fine with either option. If you do not decay the noise, you can simply drop it during prediction; a well-trained network and algorithm will be fine with that.

Soft update of the target networks

As you update your policy neural networks, at a certain frequency you will have to pass a fraction of the learning to the target networks. There are two aspects to consider here: at what frequency you pass the learning to the target networks (the original paper does it after every update of the policy network), and what fraction of the learning you pass on. A hard update to the target networks is not recommended, as it destabilizes the neural network.

A hard update to the target network worked fine for me, however. Here is my thought process: say your learning rate for the policy network is 0.001 and you update the target network with a fraction 0.01 of this every time you update your policy network. In a way, you are passing 0.001 * 0.01 of the learning to the target network. If your neural network is stable with this, it will very likely remain stable if you do a hard update (pass all the learning from the policy network to the target network on every policy update) but keep the learning rate very low.
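The arithmetic above can be sketched in a few lines of NumPy. The weight values and function names are toy illustrations; in practice the same blend is applied to every tensor of the target networks.

```python
import numpy as np

def soft_update(target, policy, tau=0.01):
    """Blend a fraction tau of the policy weights into the target weights."""
    return [(1.0 - tau) * t + tau * p for t, p in zip(target, policy)]

def hard_update(target, policy):
    """Copy the policy weights wholesale (equivalent to tau = 1)."""
    return [p.copy() for p in policy]

policy_w = [np.array([1.0, 2.0]), np.array([0.5])]
target_w = [np.zeros(2), np.zeros(1)]

# Target moves 1% of the way toward the policy weights
target_w = soft_update(target_w, policy_w, tau=0.01)
```

With tau = 1 the soft update reduces to the hard update, which is why a hard update with a very low learning rate can behave like a soft update with a larger learning rate.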

Neural network design

While you are working on optimizing your DDPG algo parameters, you also need to design a good neural network for predicting action and value. This is where the challenge lies. It is difficult to tell if the bad performance of your solution is due to the bad design of the neural network or an unoptimized DDPG algo. You will need to keep optimizing on both fronts.

While a simple neural network can help you solve OpenAI Gym problems, it will not be sufficient for a complex real-world problem. The principle I follow when designing a neural network is that the network is an implementation of your (or the domain expert’s) mental framework of the solution. So you need to understand the domain expert’s mental framework at a fundamental level to implement it in a neural network. You also need to understand which features to pass to the network and how to engineer them so the network can interpret them and predict successfully. That is where the art of the craft lies.

I still have not explored the discount rate (used to discount rewards over time steps) and have not yet developed a strong intuition about it, which is very important.

I hope you liked the article and did not find it overly simplistic or stupid. If you liked it, please do not forget to clap!



Will AI Play a Big Part in the Lok Sabha Elections?

India is in the midst of the 2024 Lok Sabha elections, and the role of technology is under scrutiny. In particular, the role of artificial intelligence (AI) is being discussed more and more. Political campaigns have increasingly relied on social media in recent years, and now AI stands ready to reshape the […]

Reliable AI Model Tuning: Leveraging HNSW Vector with Firebase Genkit

Instant AI Model Tuning: Leveraging HNSW Vector with Firebase Genkit for Retrieval-Augmented Generation

The rapid advancements in Generative AI have transformed how we interact with technology, enabling more intelligent and context-aware systems. A critical component in achieving this is Retrieval-Augmented Generation (RAG), which allows AI models to pull in specific contexts or knowledge without the need to build or retrain models from scratch.

One of the most efficient technologies facilitating this is the Hierarchical Navigable Small World (HNSW) graph-based vector index. This article will guide you through the setup and usage of the Genkit HNSW Vector index plugin to enhance your AI applications, ensuring they are capable of providing highly accurate and context-rich responses.

Understanding Generative AI

https://voiceoc.com

For those who still do not understand what generative AI is, feel free to read about it here!

Fine-tuning in Generative AI

Image by Author

Fine-tuning is a great method to improve your AI model! With fine-tuning, you can add more knowledge and context to the model.

There are various ways to implement fine-tuning, so it is important to know how to leverage the model to best fit our application requirements.

If you want to read more about these approaches and their differences, you can read more here!

Now that we know about generative AI and fine-tuning, we will learn how to implement Retrieval-Augmented Generation (RAG) using an HNSW index.

Implementing Retrieval-Augmented Generation (RAG)

Generative AI’s capabilities can be significantly enhanced when integrated with an HNSW vector index to implement the RAG mechanism. This combination allows the AI to retrieve and utilize specific contextual information efficiently, leading to more accurate and contextually relevant outputs.

Example Use Case

Consider a restaurant application or website where specific information about your restaurants, including addresses, menu lists, and prices, is integrated into the AI’s knowledge base. When a customer inquires about the price list of your restaurant in Surabaya City, the AI can provide precise answers based on the enriched knowledge.

Example Conversation with AI Model:

You: What are the new additions to the menu this week?
AI: This week, we have added the following items to our menu:
- Nasi Goreng Kampung - Rp. 18.000
- Sate Ayam Madura - Rp. 20.000
- Es Cendol - Rp. 10.000

With RAG, we can get very detailed and specific responses from the AI model.

Now, to implement this, we will be using:

  • HNSW Vector
    We will convert our defined data into a vector index that the AI model can draw on, so that it can give better responses.
  • Firebase Genkit (our special guest! :D)
    We will use this to demonstrate Retrieval-Augmented Generation (RAG) using the HNSW vector index and the Gemini AI model.

Implementing HNSW Vector index

What is HNSW?

HNSW stands for Hierarchical Navigable Small World, a graph-based algorithm that excels in vector similarity search. It is renowned for its high performance, combining fast search speeds with exceptional recall accuracy. This makes HNSW an ideal choice for applications requiring efficient and accurate retrieval of information based on vector embeddings.

Why Choose HNSW?

  • Simple Setup: HNSW offers a straightforward setup process, making it accessible even for those with limited technical expertise.
  • Self-Managed Indexes: Users have the flexibility to handle and manage the vector indexes on their servers.
  • File-Based Management: HNSW allows the management of vector indexes as files, providing ease of use and portability, whether stored as blob or stored in a database.
  • Compact and Efficient: Despite its small size, HNSW delivers fast performance, making it suitable for various applications.

Learn more about HNSW.

Implementing Firebase Genkit

https://firebase.google.com/docs/genkit

What is Firebase Genkit?

Firebase Genkit is a powerful suite of tools and services designed to enhance the development, deployment, and management of AI-powered applications.

Leveraging Firebase’s robust backend infrastructure, Genkit simplifies the integration of AI capabilities into your applications, providing seamless access to machine learning models, data storage, authentication, and more.

Key Features of Firebase Genkit

  • Seamless Integration: Firebase Genkit offers a straightforward integration process, enabling developers to quickly add AI functionalities to their apps without extensive reconfiguration.
  • Scalable Infrastructure: Built on Firebase’s highly scalable cloud infrastructure, Genkit ensures that your AI applications can handle increased loads and user demands efficiently.
  • Comprehensive Suite: Genkit includes tools for data management, real-time databases, cloud storage, authentication, and more, providing a comprehensive solution for AI app development.

Enhancing Generative AI with Firebase Genkit

By integrating Firebase Genkit with your Generative AI applications, you can significantly enhance the functionality and user experience. Here’s how Firebase Genkit contributes to the effectiveness of AI applications:

  1. Real-Time Data Handling: Firebase Genkit’s real-time database allows for the immediate update and retrieval of data, ensuring that your AI models always have access to the latest information. This is particularly useful for applications that require dynamic content generation based on current data, such as chatbots and recommendation systems.
  2. Scalable AI Deployments: Leveraging Firebase’s cloud infrastructure, Genkit enables scalable deployments of AI models. This means that as your application grows and user demand increases, the infrastructure can automatically scale to meet these needs without compromising performance.
  3. Simplified Data Management: With Firebase’s integrated data storage and management tools, developers can easily handle the data required for training and operating AI models. This includes capabilities for storing large datasets, real-time updates, and secure data handling.

To start using Firebase Genkit in your AI applications, follow these steps:

  1. Set Up Firebase: Create a Firebase project and set up your real-time database, storage, and authentication services.
  2. Install Genkit: Integrate Genkit into your project by following the installation instructions provided in the Genkit documentation.
  3. Configure Plugins: Add and configure the necessary Genkit plugins for data management, AI model integration, and user authentication.

Learn more about Firebase Genkit

Now let’s get hands-on and learn how to build such an AI solution!

Setting Up the Genkit HNSW Plugin

Prerequisites

Before installing the plugin, ensure you have the following installed:

  • Node.js (version 12 or higher)
  • npm (comes with Node.js)
  • TypeScript (install globally via npm: npm install -g typescript)
  • Genkit (install globally via npm: npm install -g genkit)

First things first, initiate the Genkit project with

genkit init

follow the instructions here.

Once you have the Genkit project installed, make sure the project is well prepared. You can check first by running

genkit start

If it runs well and opens the Genkit UI in a browser, then you are good to go!

Installing the HNSW plugin

To install the Genkit HNSW plugin, run the following command:

npm install genkitx-hnsw

We will be using two Genkit plugins here.

  1. HNSW Indexer plugin
  2. HNSW Retriever plugin

1. HNSW Indexer Plugin

The HNSW Indexer plugin helps create a vector index from your data, which can be used as a knowledge reference for the HNSW Retriever.

Data Preparation

Prepare your data or documents, for instance, restaurant data, in a dedicated folder.

Registering the HNSW Indexer Plugin

Find the genkit.config.ts file in your project (usually /root/src/genkit.config.ts) and register the plugin there. Note that the configureGenkit import (assumed here to come from @genkit-ai/core) was missing from the original snippet:

import { configureGenkit } from "@genkit-ai/core";
import { hnswIndexer } from "genkitx-hnsw";

export default configureGenkit({
  plugins: [
    hnswIndexer({ apiKey: "GOOGLE_API_KEY" })
  ]
});

Running the Indexer

  1. Open the Genkit UI and select the registered HNSW Indexer plugin.
  2. Execute the flow with the required parameters:
  • dataPath: Path to your data and documents.
  • indexOutputPath: Desired output path for the generated vector store index.

Vector Store Index Result

The HNSW vector store will be saved in the specified output path, ready for use with the HNSW Retriever plugin.

2. HNSW Retriever Plugin

The HNSW Retriever plugin processes prompts with the Gemini LLM, enriching them with specific information retrieved from the HNSW vector index.

Registering the HNSW Retriever Plugin

Import the necessary plugins into your Genkit project:

import { configureGenkit } from "@genkit-ai/core";
import { googleAI } from "@genkit-ai/googleai";
import { hnswRetriever } from "genkitx-hnsw";

export default configureGenkit({
  plugins: [
    googleAI(),
    hnswRetriever({ apiKey: "GOOGLE_API_KEY" })
  ]
});

Running the Retriever

  1. Open the Genkit UI and select the HNSW Retriever plugin.
  2. Execute the flow with the required parameters:
  • prompt: Your input query for the AI.
  • indexPath: Path to the vector index file generated by the HNSW Indexer plugin.

Example Prompt

To ask about the price list of a restaurant in Surabaya City:

prompt: "What is the price list of my restaurant in Surabaya City?"
indexPath: "/path/to/your/vector/index"

Conclusion

The integration of HNSW Vector index with Genkit significantly enhances the capabilities of Generative AI models by providing enriched context and specific knowledge.

This approach not only improves the accuracy of AI responses but also simplifies the process of knowledge integration, making it a powerful tool for various applications.

By following the steps outlined in this article, you can effectively leverage the HNSW vector index to build more intelligent and context-aware AI systems in very little time.

Hope this helps and see you in the next one!



CInA: A New Technique for Causal Reasoning in AI Without Needing Labeled Data

AI Robot

Causal reasoning has been described as the next frontier for AI. While today’s machine learning models are proficient at pattern recognition, they struggle with understanding cause-and-effect relationships. This limits their ability to reason about interventions and make reliable predictions. For example, an AI system trained on observational data may learn incorrect associations like “eating ice cream causes sunburns,” simply because people tend to eat more ice cream on hot sunny days. To enable more human-like intelligence, researchers are working on incorporating causal inference capabilities into AI models. Recent work by Microsoft Research Cambridge and Massachusetts Institute of Technology has shown progress in this direction.

About the paper

Recent foundation models have shown promise for human-level intelligence on diverse tasks. But complex reasoning such as causal inference remains challenging, requiring intricate steps and high precision. The researchers take a first step toward building causally-aware foundation models for such tasks. Their novel Causal Inference with Attention (CInA) method uses multiple unlabeled datasets for self-supervised causal learning, and then enables zero-shot causal inference on new tasks and data. This works thanks to their theoretical finding that optimal covariate balancing is equivalent to regularized self-attention, which lets CInA extract causal insights through the final layer of a trained transformer model. Experiments show CInA generalizes to new distributions and real datasets, matching or beating traditional causal inference methods. Overall, CInA is a building block for causally-aware foundation models.

Key takeaways from this research paper:

  • The researchers proposed a new method called CInA (Causal Inference with Attention) that can learn to estimate the effects of treatments by looking at multiple datasets without labels.
  • They showed mathematically that finding the optimal weights for estimating treatment effects is equivalent to using self-attention, an algorithm commonly used in AI models today. This allows CInA to generalize to new datasets without retraining.
  • In experiments, CInA performed as well as or better than traditional methods that require retraining, while taking much less time to estimate effects on new data.

My takeaway on Causal Foundation Models:

  • Being able to generalize to new tasks and datasets without retraining is an important ability for advanced AI systems. CInA demonstrates progress towards building this into models for causality.
  • CInA shows that unlabeled data from multiple sources can be used in a self-supervised way to teach models useful skills for causal reasoning, like estimating treatment effects. This idea could be extended to other causal tasks.
  • The connection between causal inference and self-attention provides a theoretically grounded way to build AI models that understand cause and effect relationships.
  • CInA’s results suggest that models trained this way could serve as a basic building block for developing large-scale AI systems with causal reasoning capabilities, similar to natural language and computer vision systems today.
  • There are many opportunities to scale up CInA to more data, and apply it to other causal problems beyond estimating treatment effects. Integrating CInA into existing advanced AI models is a promising future direction.

This work lays the foundation for developing foundation models with human-like intelligence by incorporating self-supervised causal learning and reasoning abilities.



Hyderabad’s Hyperleap AI Unveils Generative AI Platform After $225K Pre-Seed Round

Hyderabad-based startup Hyperleap AI has launched an advanced generative AI platform designed specifically for businesses. Prior to the release, it secured $225,000 in a pre-seed funding round from several angel investors, including Anil Kommineni, Senior Vice President at Zenoti. The […]