Center for Artificial Intelligence
The Center for Artificial Intelligence (CAI) is a dedicated, specialized unit within Universitas Teknokrat Indonesia that focuses on artificial intelligence development. CAI provides comprehensive support, resources, and guidance to students and aspiring technologists, with the goals of driving AI innovation, building AI models, and developing machine learning and deep learning specialists.
Our Team
Product and Facilities
Product
This product is a linear regression-based artificial intelligence algorithm designed to improve water-use efficiency in the irrigation systems of chili plantations run by farmer groups in South Lampung. By leveraging AI and IoT technology, the algorithm offers a smart solution for managing and optimizing water usage in real time.
Key Features:
- Linear Regression Model: The algorithm uses linear regression to predict water requirements based on variables such as weather conditions, soil moisture, and crop growth stage. The model is trained with historical data to ensure high prediction accuracy.
- IoT integration: The system is integrated with IoT sensors to collect data directly from the field. The data obtained from the sensors such as soil moisture and rainfall are used to dynamically update the regression model and adjust the water flow.
- Water Use Optimization: The algorithm optimizes irrigation schedules and volumes to reduce water wastage, ensuring plants get the right amount of water they need.
- Real-Time Monitoring and Analysis: The real-time monitoring feature allows users to view the current condition of the irrigation system and make adjustments when necessary. Data analysis helps in accurate information-based decision making.
- User-Friendly Interface: The system comes with an intuitive user interface, making it easy for farmers to access data and manage the irrigation system without in-depth technical knowledge.
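As a rough illustration of the regression step described above, the sketch below fits an ordinary-least-squares model that maps sensor readings (soil moisture, temperature, growth stage) to a water volume. All feature names and numbers are illustrative assumptions, not the deployed model or actual field data:

```python
import numpy as np

# Illustrative historical readings: [soil moisture %, temperature °C,
# growth stage 0-1] and the corresponding water need in litres.
# All values are made up for the sketch.
X = np.array([
    [20.0, 32.0, 0.2],
    [35.0, 29.0, 0.4],
    [50.0, 30.0, 0.6],
    [65.0, 25.0, 0.9],
    [80.0, 24.0, 1.0],
])
y = np.array([81.0, 65.5, 48.0, 34.0, 17.0])  # litres per plot

# Ordinary least squares with an intercept column appended.
A = np.hstack([X, np.ones((X.shape[0], 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_water_litres(soil_moisture, temp_c, growth_stage):
    """Predict irrigation volume for the current sensor readings."""
    features = np.array([soil_moisture, temp_c, growth_stage, 1.0])
    return float(features @ coef)
```

In the real system the coefficients would be refit as new sensor and weather data arrive, which is the "dynamically update the regression model" behavior noted above.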
This product is an advanced tracking and visual odometry algorithm designed for quadcopter (VTOL) robots. It improves the navigation and control capabilities of quadcopters following the KRTI (Indonesian Flying Robot Contest) course. By utilizing image processing and visual tracking technology, the system enables the VTOL robot to identify and follow the trajectory with high accuracy.
Key Features:
- Visual Odometry: The algorithm uses visual odometry techniques to estimate the quadcopter's motion and position from video captured by the onboard camera. By analyzing visual changes between frames in real time, the system calculates the quadcopter's speed and direction of movement.
- Object Tracking: Implements an object tracking method to identify and follow the path predefined in the contest. The algorithm accurately detects and tracks salient visual features along the trajectory, ensuring the quadcopter stays on the correct path.
- Real-Time Navigation Adjustment: The system is capable of dynamically adjusting navigation based on visual feedback. If any changes to the trajectory or obstacles arise, the algorithm can immediately adjust the quadcopter's direction and speed.
- Drift Compensation: Overcomes the problem of drift and position deviation with advanced correction methods, ensuring that the quadcopter stays on the desired path with high fidelity.
- System Integration Interface: The algorithm is designed for seamless integration with existing quadcopter control systems, allowing the use of additional sensors and hardware for optimal results.
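The frame-to-frame motion estimate at the core of visual odometry can be sketched as follows, assuming a feature matcher (e.g. optical flow or ORB matching, omitted here) has already produced corresponding point pairs between two frames; taking the median displacement gives a simple outlier-robust translation estimate:

```python
import numpy as np

def estimate_translation(prev_pts, curr_pts):
    """Estimate the image-plane translation (dx, dy) between two frames
    from matched feature points. The median over all matches makes the
    estimate robust to a minority of bad correspondences."""
    disp = np.asarray(curr_pts, dtype=float) - np.asarray(prev_pts, dtype=float)
    dx, dy = np.median(disp, axis=0)
    return dx, dy
```

Integrated over time (and scaled using camera calibration and altitude), such per-frame translations yield the position estimate the Visual Odometry bullet above refers to.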
This AI model is designed to detect and classify different types of waste in real-time on robots participating in the Indonesian Robot Contest (KRTMI). Using computer vision technology and deep learning, this model enables robots to identify and separate waste with high accuracy, improving efficiency and effectiveness in waste management.
Key Features:
- Real-Time Waste Classification: The model is trained to recognize and classify different types of waste, such as plastic, paper, metal, and organics, from images taken by the robot's camera. This capability enables the robot to make waste-handling decisions quickly and accurately.
- Advanced Image Processing: Uses image processing techniques and convolutional neural networks (CNNs) to analyze images of waste and determine its type from visual features. The model can handle varied lighting conditions and viewing angles.
- Sensor Integration: Can be integrated with various sensors, including RGB cameras, to increase detection accuracy and improve waste classification under different conditions.
- Adaptive Learning: Can learn from new data and improve detection accuracy over time through adaptive learning and fine-tuning techniques.
- User Interface and Controls: Equipped with an intuitive user interface for monitoring and managing waste detection, as well as a control module that directs robot actions based on detection results.
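To illustrate how classification output might drive the robot's sorting action, here is a hypothetical decision step; the class names, bin mapping, and confidence threshold are assumptions for the sketch, not the contest configuration:

```python
# Hypothetical mapping from predicted waste class to a sorting bin.
BIN_FOR_CLASS = {
    "plastic": "recyclable",
    "paper": "recyclable",
    "metal": "recyclable",
    "organic": "compost",
}

def route_detection(scores, threshold=0.6):
    """Choose a bin from the classifier's per-class scores. When the top
    score falls below the confidence threshold, defer to manual
    inspection rather than risk mis-sorting."""
    label = max(scores, key=scores.get)
    if scores[label] < threshold:
        return "manual_inspection"
    return BIN_FOR_CLASS[label]
```

The threshold is the kind of parameter that would be tuned as the adaptive-learning step above improves the model's confidence calibration.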
This AI model is specially designed for path, gate, and pole detection and obstacle avoidance in underwater environments for underwater robots. Using computer vision and deep learning, the model enables underwater robots to navigate efficiently, recognize paths and obstacles, and perform avoidance automatically in complex underwater environments.
Key Features:
- Path and Gate Detection: The model identifies navigation paths and gates set in an underwater environment. Using advanced image processing techniques, the model can recognize path and gate structures from images taken by underwater cameras.
- Pole and Obstacle Recognition: Detects poles and other obstacles that may be present in the navigation path. The model identifies and classifies objects that could block the path and affect the robot's navigation.
- Real-Time Avoidance: The real-time obstacle avoidance feature allows the robot to detect and avoid unwanted obstacles in the path. The algorithm analyzes imagery in real time to make quick and effective avoidance decisions.
- Image Processing for Underwater Conditions: Uses image processing techniques tailored to underwater conditions, including distortion reduction and compensation for poor lighting, to ensure high detection accuracy.
- Underwater Sensor Integration: Can be integrated with various underwater sensors such as sonar, stereo cameras, and depth sensors to improve detection and navigation accuracy.
- User Interface and Control: Equipped with a user interface that allows live monitoring of detection and a control module to respond to detection results with appropriate actions, such as path change or obstacle avoidance.
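One common building block for compensating poor underwater lighting is a percentile-based contrast stretch; the generic sketch below is an illustration of the idea, not the model's actual preprocessing pipeline:

```python
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    """Percentile-based contrast stretch for dim underwater frames:
    map the [low, high] percentile intensity range onto the full
    [0, 255] range, clipping the extremes."""
    img = np.asarray(img, dtype=float)
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:  # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.uint8)
    out = (img - lo) / (hi - lo) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

Using percentiles instead of the raw min/max keeps a few specular highlights or sensor-noise pixels from dominating the stretch.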
This AI model is designed for real-time object detection implemented on Search and Rescue (SAR) robots. Using computer vision technology and deep learning, this model enables SAR robots to identify and localize important objects, such as victims, obstacles, and special markings in search and rescue areas. Fast and accurate detection capabilities are key to efficient and safe SAR operations.
Key Features:
- Real-Time Object Detection: The model is capable of recognizing and classifying various objects in real-time from videos or images captured by the robot's sensors. Detectable objects include humans, vehicles, obstacles, and special signs such as emergency signals.
- Classification and Localization: Upon detecting an object, the model not only identifies the type of object but also determines its location with precision. This helps the SAR robot focus on areas that require special attention.
- Environmental Condition Adaptation: Handles a variety of challenging environmental conditions, including low lighting, bad weather, and difficult terrain, using advanced image processing techniques and adaptive learning.
- Multimodal Sensor Integration: Can be integrated with various sensors such as RGB cameras, thermal cameras, and LiDAR to improve detection accuracy and provide more complete information about the surrounding environment.
- Intuitive User Interface: Equipped with a user interface that allows the operator to monitor detection results directly and make decisions based on the information provided by the robot.
- Obstacle Avoidance: Beyond object detection, the model can also identify and avoid obstacles in the robot's path, improving maneuverability and safety during SAR operations.
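Real-time detectors of this kind typically post-process raw detections with non-maximum suppression (NMS), which collapses overlapping boxes that cover the same object. This is a generic sketch of the standard technique, not the team's exact pipeline:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Keep the highest-scoring box, drop every box overlapping it by
    more than iou_thresh, then repeat on the remaining boxes.
    Returns the indices of the kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        order = [i for i in order if iou(boxes[best], boxes[i]) < iou_thresh]
    return keep
```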
Facilities
The laboratory plays a central role in conducting research in robotics and control systems.
Its functions can be described as follows:
- Training
The Internet of Things Laboratory facilitates lecturers and students to conduct training with the aim of increasing knowledge, skills, and abilities in the field of robotics and control systems.
- Research and development
The Science and Technology Center of Excellence Laboratory facilitates students to carry out research and development in the fields of science and technology, which are the main focus. This includes conducting in-depth research, experimentation, and innovation to produce new knowledge and cutting-edge technology.
- Championship and Competition Support
This laboratory supports competition activities by providing the facilities and resources needed for training, testing, and development of products or technology that will be used in the competition.
- Collaboration and Partnership
The Internet of Things Laboratory encourages and manages collaboration with related institutions, academic institutions, government, industry, and organizations. The aim is to increase synergies, support joint research, and expand partnership networks.
In Progress
Design and develop image processing system algorithms to improve the detection, classification, and navigation capabilities of autonomous cars under various environmental conditions. These algorithms will enable the vehicle to understand and interact with its surrounding environment with high accuracy and efficiency.
- Planning and Analysis
- Definition of Specifications and Requirements: Develop a detailed specification document of the system needs and project objectives.
- Data and Resource Collection: Collect image datasets and sensor information required for development.
- System Requirements Analysis: Analyze the technical and functional requirements of the system to ensure a sound design.
- Development and Implementation
- Preprocessing Algorithm Development: Implementation of techniques for initial image cleaning and adjustment.
- Segmentation and Feature Extraction Algorithm Development: Implementation of segmentation and feature extraction techniques using convolutional neural networks (CNN) or other methods.
- Detection and Classification Model Implementation: Development and training of object detection and classification models, such as YOLO or SSD.
- Integration with Navigation Systems: Implementation of algorithms for path detection, obstacle avoidance, and route planning.
- Sensor Data Integration: Combining data from various sensors such as RGB cameras, radar, and LiDAR for comprehensive analysis.
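The sensor data integration step above can be sketched as inverse-variance fusion of independent range estimates from, say, the camera, radar, and LiDAR; the sensor noise figures here are illustrative assumptions:

```python
def fuse_ranges(measurements):
    """Fuse independent distance estimates given as (value, variance)
    pairs via inverse-variance weighting. Less noisy sensors get more
    weight, and the fused variance never exceeds the best sensor's."""
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for w, (v, _) in zip(weights, measurements)) / total
    return value, 1.0 / total
```

For example, fusing a camera range of 10 m and a radar range of 12 m with equal variance yields 11 m with half the uncertainty of either sensor alone; this is the simplest static case of the Kalman-style fusion an autonomous vehicle would run continuously.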