Summary

1.0 Introduction
2.0 Sensors
      2.1 Line Sensor
      2.2 Signal Receiver
      2.3 Visual Sensor
3.0 Actuators
    3.1 Signal Emitter
4.0 Simulations
      4.1 Robots War
      4.2 Survive
      4.3 Signal Source
      4.4 Pathfind
5.0 Release
6.0 Contact


1.0 Introduction

Justhink is a framework for artificial intelligence simulations. The framework architecture does not fix the AI elements, such as the intelligent agent, sensors, actuators and environment. These elements are specified by the simulation developer through an API. The framework is white-box: specifications are made by extending framework classes. The simulation software, the simulation modules (missions) and the agents' AI are developed in Java.

The initial version of this framework was developed by Gabriel Ambrósio Archanjo, advised by Prof. Marcio Henrique Zuchini, as coursework for the Computer Science program at Universidade São Francisco.

Every AI element of the framework is extensible, so it is possible to support a wide range of simulation types. The environment characteristics are defined by the simulation developer, for instance:

- Partially or fully observable;
- Deterministic or stochastic;
- Static or dynamic;
- Discrete or continuous;
- Single agent or multiagent;
- The rules that agents and other simulation objects must respect.

The agents' intelligence modules are developed in Java and there are no restrictions on the resources they may use. Developers can import any package from any library, so it is possible to integrate other Java AI libraries with the framework.

2.0 Sensors

Sensors give agents the ability to perceive the environment. For the framework, a sensor is a component plugged into the agent that provides some information about the environment or about the agent itself. To develop a new sensor, it is necessary to extend an abstract sensor class defined in the framework, as illustrated by the sketch below. Three sensor extensions are presented in the following subsections.
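
Since the source code and the API documentation have not been released yet (see section 5.0), the sketch below only illustrates the extension mechanism: the shape of the abstract class, the class names and the update() hook are assumptions, not the framework's real API.

    // Assumed shape of the framework's abstract sensor class; real names may differ.
    abstract class Sensor {
        // Called by the simulation on every step to refresh the sensor's reading.
        public abstract void update();
    }

    // A simulation developer would plug a concrete sensor into an agent by
    // extending the abstract class and exposing its reading to the agent's
    // intelligence module.
    class EnergySensor extends Sensor {
        private double energyLevel;

        @Override
        public void update() {
            // In a real simulation this would query the agent or the environment;
            // a placeholder value stands in for that query here.
            energyLevel = 100.0;
        }

        public double getEnergyLevel() {
            return energyLevel;
        }
    }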

2.1 Line Sensor

The Line Sensor projects a line into the environment and detects objects that intersect this line. Intelligent agents may access information about the detected object, such as its distance and its color. Different types of objects can be distinguished by their colors.


Figure 1. An intelligent agent detecting an object using the sensor on its right side. The points p1 and p2 delimit the line projected by the sensor, which intersects an object. D is the distance between the agent and the detected object.
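
In geometric terms, the detection reduces to a segment-object intersection test. The method below is a self-contained illustration of such a test for a circular object, assuming p1 is the point where the sensor is mounted on the agent; it is not the framework's implementation.

    // Illustrative geometry only; not the framework's code.
    // Returns the distance D from p1 to the nearest intersection of the segment
    // p1->p2 with a circular object (center cx,cy, radius r), or -1 if the
    // segment does not touch the object.
    static double lineSensorDistance(double p1x, double p1y, double p2x, double p2y,
                                     double cx, double cy, double r) {
        double dx = p2x - p1x, dy = p2y - p1y;   // direction of the projected line
        double fx = p1x - cx, fy = p1y - cy;     // from the object center to p1
        double a = dx * dx + dy * dy;
        double b = 2 * (fx * dx + fy * dy);
        double c = fx * fx + fy * fy - r * r;
        double disc = b * b - 4 * a * c;         // discriminant of |p1 + t*(p2-p1) - center|^2 = r^2
        if (disc < 0) return -1;                 // the line misses the object
        double sqrtDisc = Math.sqrt(disc);
        double t1 = (-b - sqrtDisc) / (2 * a);   // nearer intersection parameter
        double t2 = (-b + sqrtDisc) / (2 * a);
        double t = (t1 >= 0 && t1 <= 1) ? t1 : ((t2 >= 0 && t2 <= 1) ? t2 : -1);
        if (t < 0) return -1;                    // intersections fall outside the segment p1-p2
        return t * Math.sqrt(a);                 // distance D along the projected line
    }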


2.2 Signal Receiver

The Signal Receiver detects signals transmitted in the environment. For the framework, a signal is a piece of information propagated through the environment. A signal has the following attributes: value/information, intensity and type. A Signal Receiver is associated with one signal type and can only interpret signals of that type. This sensor is useful, for instance, for providing a communication channel between agents in a multiagent environment. The signal intensity decreases as it propagates, so the distance between the emitter and the receiver is an important factor for successful communication.


Figure 2. An intelligent agent detecting a signal transmitted in the environment.
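
A minimal sketch of how a receiver could filter signals by type and account for the intensity lost during propagation is shown below. The attributes follow the description above, but the inverse-square attenuation, the sensitivity threshold and all names are illustrative assumptions rather than the framework's actual model.

    // Illustrative model only; not the framework's API.
    class Signal {
        final String type;       // a receiver only interprets signals of its own type
        final Object value;      // the information being propagated
        final double intensity;  // intensity at the emission point

        Signal(String type, Object value, double intensity) {
            this.type = type;
            this.value = value;
            this.intensity = intensity;
        }
    }

    class SignalReceiver {
        private final String acceptedType;
        private final double sensitivity;   // weakest intensity the receiver can detect

        SignalReceiver(String acceptedType, double sensitivity) {
            this.acceptedType = acceptedType;
            this.sensitivity = sensitivity;
        }

        // Returns the intensity perceived at the given distance from the emitter,
        // or -1 if the signal has another type or is too weak to detect.
        double receive(Signal signal, double distanceFromEmitter) {
            if (!acceptedType.equals(signal.type)) return -1;
            double perceived = signal.intensity / (1 + distanceFromEmitter * distanceFromEmitter);
            return perceived >= sensitivity ? perceived : -1;
        }
    }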




2.3 Visual Sensor

The Visual Sensor detects objects inside its field of view. The field of view is determined by the view point, the angle of view and the depth. Every object intersecting this field is perceived. The sensor creates a visual representation of each object based on its vertices. This sensor provides useful navigation information for intelligent agents.


Figure 3. (a) An intelligent agent detecting three objects inside its field of view. (b) The visible vertices of each object.
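
The perception test can be expressed as an angle-and-distance check applied to each object vertex. The method below is a self-contained illustration of that idea; it is not taken from the framework, and all parameter names are illustrative.

    // Illustrative only. Checks whether a vertex (vx, vy) lies inside a field of
    // view defined by a view point (px, py), a viewing direction (radians), a
    // total angle of view (radians) and a maximum depth.
    static boolean inFieldOfView(double px, double py, double direction,
                                 double angleOfView, double depth,
                                 double vx, double vy) {
        double dx = vx - px, dy = vy - py;
        double distance = Math.hypot(dx, dy);
        if (distance > depth) return false;                      // beyond the sensor's depth
        double angleToVertex = Math.atan2(dy, dx);
        double delta = Math.atan2(Math.sin(angleToVertex - direction),
                                  Math.cos(angleToVertex - direction));  // wrap to [-pi, pi]
        return Math.abs(delta) <= angleOfView / 2;               // inside the view cone
    }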


3.0 Actuators

Actuators allow intelligent agents to affect themselves and their environment.



3.1 Signal Emitter

Using the Signal Emitter, agents can transmit signals into the environment. This actuator can be used to provide communication between agents.


Figure 4. An intelligent agent emitting signals. The signals are represented as circles by the simulation software.
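
A minimal sketch of the emitting side is shown below: the emitter simply drops a signal into the environment, where receivers associated with the same type (section 2.2) can later perceive it. The shared list and all class and method names are assumptions for illustration only, not the framework's API.

    // Illustrative only; not the framework's API.
    import java.util.ArrayList;
    import java.util.List;

    class SignalEmitterSketch {

        // Minimal signal carrying the attributes described in section 2.2.
        static class Signal {
            final String type;
            final String value;
            final double intensity;

            Signal(String type, String value, double intensity) {
                this.type = type;
                this.value = value;
                this.intensity = intensity;
            }
        }

        // The emitter drops signals into a shared list that stands in for the
        // environment; receivers of the matching type would read them from there.
        static void emit(List<Signal> environmentSignals, String type, String value, double intensity) {
            environmentSignals.add(new Signal(type, value, intensity));
        }

        public static void main(String[] args) {
            List<Signal> environment = new ArrayList<>();
            emit(environment, "food-found", "x=10;y=25", 1.0);   // broadcast a message to other agents
            System.out.println(environment.size() + " signal(s) propagating in the environment");
        }
    }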


4.0 Simulations

This section presents a few simulations developed using the framework.

4.1 Robots War

This simulation creates a battle between robots. Every robot is equipped with a gun, and its goal is to destroy the others. This simulation demonstrates a multiagent problem in a competitive situation. The intelligent agents are able to detect each other using the Visual Sensor. The gun can fire only a limited number of shots per unit of time.

Figure 5. Simulation "RobotsWar". The green triangle represents the agent's field of view. The small squares are the bullets shot by the gun.


4.2 Survive

In this simulation, intelligent agents must survive in the environment, which contains two resources: water and food. The agents do not have any kind of memory; after using a resource, an agent that wants to use it again in the future must find it again. However, the simulation has 10 agents that are able to communicate by emitting signals. There are two signal types, one indicating that an agent found water and another indicating that it found food. This simulation demonstrates a multiagent problem in a cooperative situation. Using the signal intensity measured by the left and right receivers, agents can estimate the signal origin in two-dimensional space. If an agent dies, it becomes a food resource and involuntarily emits a signal, like a smell.

Figure 6. Simulation "Survive". The blue rectangle represents the water resource and the brown rectangle represents the food resource. The blue circles spread across the environment represent the signals emitted by agents. The green lines represent the line sensors used by the agents to navigate the map. All simulation elements are recognized by the agents by their colors.


4.3 Signal Source

In this simulation, the agents in blue have two signal receivers, one on the left side and the other on the right. The agent in red has a signal emitter and emits signals constantly. The objective of the agents in blue is to find the agent in red. From the intensity difference between the left and right receivers, the agents can estimate the direction of the signal source. This principle is based on the same mechanism human beings use to locate sounds.

Figure 7. The agent in red emits signals constantly. The other agents try to find it using the intensity difference between their receivers.
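
The steering rule described here can be sketched in a few lines: the agent compares the two perceived intensities and turns toward the stronger side. The method below is an illustrative approximation, not the controller used in the simulation; the steering gain is an arbitrary assumption.

    // Illustrative only; not the simulation's actual controller.
    // Given the intensities perceived by the left and right receivers, returns a
    // turn command in radians: positive turns the agent toward its left, negative
    // toward its right, and values near zero mean the source is roughly ahead.
    static double estimateTurn(double leftIntensity, double rightIntensity) {
        double total = leftIntensity + rightIntensity;
        if (total <= 0) return 0;                                   // no signal perceived yet
        double balance = (leftIntensity - rightIntensity) / total;  // ranges from -1 to 1
        double steeringGain = 0.5;                                  // arbitrary gain (assumption)
        return steeringGain * balance;                              // turn toward the stronger side
    }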


4.4 Pathfind

In this simulation, a troop must find the best path between two points on the map. To achieve this, it has full information about the map and about the cost to cross each type of terrain. The intelligent agent representing the troop solves the problem using the A* algorithm.

Figure 8. Simulation "Pathfind". The green circle represents the start point of the path. The red circles represent the nodes of the path. The geometric shapes represent different terrains with different crossing costs.
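
For reference, the sketch below is a compact grid-based A* search with per-cell terrain costs. It illustrates the algorithm mentioned above, but it is not the code used in the simulation; the 4-connected grid, the cost matrix and the Manhattan heuristic are illustrative assumptions.

    // Compact grid A*; illustrative only, not the simulation's implementation.
    // cost[y][x] is the cost to enter a cell (different terrains, different costs).
    import java.util.*;

    class AStarSketch {

        // Returns the cells of the cheapest 4-connected path from start to goal
        // (each given as {x, y}), or an empty list if the goal is unreachable.
        static List<int[]> findPath(double[][] cost, int[] start, int[] goal) {
            int h = cost.length, w = cost[0].length;
            double[][] g = new double[h][w];
            for (double[] row : g) Arrays.fill(row, Double.POSITIVE_INFINITY);
            int[][] cameFrom = new int[h][w];
            for (int[] row : cameFrom) Arrays.fill(row, -1);

            // Open list ordered by f = g + heuristic.
            PriorityQueue<double[]> open =
                new PriorityQueue<>(Comparator.comparingDouble((double[] n) -> n[0]));
            g[start[1]][start[0]] = 0;
            open.add(new double[] { heuristic(start, goal), start[0], start[1] });

            int[][] moves = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
            while (!open.isEmpty()) {
                double[] node = open.poll();
                int x = (int) node[1], y = (int) node[2];
                if (x == goal[0] && y == goal[1]) return reconstruct(cameFrom, w, goal);
                for (int[] m : moves) {
                    int nx = x + m[0], ny = y + m[1];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    double tentative = g[y][x] + cost[ny][nx];   // add the terrain crossing cost
                    if (tentative < g[ny][nx]) {
                        g[ny][nx] = tentative;
                        cameFrom[ny][nx] = y * w + x;            // remember the predecessor cell
                        open.add(new double[] { tentative + heuristic(new int[] { nx, ny }, goal), nx, ny });
                    }
                }
            }
            return Collections.emptyList();                      // goal unreachable
        }

        // Manhattan distance; admissible as long as every crossing cost is >= 1.
        static double heuristic(int[] a, int[] b) {
            return Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]);
        }

        // Walks the predecessor links backwards from the goal to the start.
        static List<int[]> reconstruct(int[][] cameFrom, int w, int[] goal) {
            LinkedList<int[]> path = new LinkedList<>();
            int index = goal[1] * w + goal[0];
            while (index != -1) {
                path.addFirst(new int[] { index % w, index / w });
                index = cameFrom[index / w][index % w];
            }
            return path;
        }
    }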


5.0 Release

Currently, the framework source code is not available. It will be released together with the API documentation. If you want more information about the framework, please contact me.

6.0 Contact

Gabriel Ambrósio Archanjo
leirbag.arc@gmail.com
