Line Follower Code: NXT Robot and AVR Samples


If the state of the sensor changed, it generates the appropriate event for us. The protected object declares the event data as an Integer together with a Signalled flag, and different tasks can use it to communicate with each other; for example, a task can block until it receives an event. To generate the events, declare and implement a task "EventdispatcherTask". It should call the appropriate API function to read the touch sensor and compare the reading to its old state.

A static variable may be useful for that. If the state changed, the task should release the corresponding event by calling the Signal procedure of the Event protected object. Just as in part 1, put your code in an infinite loop with a delay at the end of the loop body.
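The lab is written against its own API, but the dispatcher logic can be sketched roughly as follows in C; read_touch_sensor, event_signal and sleep_ms are hypothetical stand-ins, not the lab's real function names:

```c
#include <stdbool.h>

/* Hypothetical helpers standing in for the lab's API:
 * read_touch_sensor() returns true while the button is pressed,
 * event_signal() releases an event on the protected object,
 * sleep_ms() delays the calling task. */
enum event_id { EVENT_PRESSED, EVENT_RELEASED };

extern bool read_touch_sensor(void);
extern void event_signal(enum event_id e);
extern void sleep_ms(unsigned ms);

void event_dispatcher_task(void)
{
    static bool old_state = false;  /* remembered between iterations */

    for (;;) {                      /* infinite loop, as in part 1 */
        bool state = read_touch_sensor();
        if (state != old_state) {   /* state changed: signal the event */
            event_signal(state ? EVENT_PRESSED : EVENT_RELEASED);
            old_state = state;
        }
        sleep_ms(10);               /* delay at the end of the loop body */
    }
}
```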

As suggested by the names of the events, the idea is that they should occur as soon as the user presses or releases the attached touch sensor. In order for MotorcontrolTask to have priority over EventdispatcherTask, make sure to assign a lower priority to the latter. Otherwise, the infinite loop containing the sensor reading would keep the system completely busy, and it could never react to the generated events.

Add some nice status output on the LCD as well. This should complete your basic event-driven program. Compile and upload the program and test whether the car reacts to your commands. Next, attach a light sensor to your car, somewhere in front of the wheel axle, close to the ground, pointing downward. Extend the program to also react to this light sensor: the car should stop not only when the touch sensor is released, but also when the light sensor detects that the car is very close to the edge of a table.

You may need to play a little with the light sensor threshold. The car should only start moving again when the car is back on the table and the touch sensor is pressed again. The edge detection should happen in EventdispatcherTask and be communicated to MotorcontrolTask via the event protected object. Use two new events for that purpose.

Make sure you define and use all events properly. Further, the display should provide some useful information about the state of the car. Please hand in only the source of the full second program, the one that includes the light sensor code. Include brief explanations and make sure your source is well-commented; hand-ins without meaningful comments will be discarded directly.

Real-time schedulers usually schedule most of their tasks periodically. This usually fits the application: sensor data needs to be read periodically, and reactions in control loops are also calculated periodically, relying on a constant sampling period.

Another advantage over purely event-driven scheduling is that the system becomes much more predictable, since load bursts are avoided and very sophisticated techniques exist to analyze periodic schedules. You will learn about response-time analysis later during the course.

The target application in this part will make your car move forward until the end of the table is reached and then start moving backward. Additionally, the touch sensor is used to tell the car to stop moving backwards and to move forward again. Note that this is again a new program, so do not just extend the program from the event-driven assignment part; create a new program instead. The structure of the system in this part is as follows: we have three tasks that are scheduled periodically, with different periods.
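As a rough C illustration of the pattern, each periodic task can be built from the same skeleton; the timing helpers clock_ms and sleep_until_ms are hypothetical stand-ins for whatever the runtime provides:

```c
/* Hypothetical timing helpers: clock_ms() returns a monotonic
 * millisecond tick, sleep_until_ms() blocks until that tick. */
extern unsigned long clock_ms(void);
extern void sleep_until_ms(unsigned long wakeup);

/* Generic periodic task body: each task uses the same skeleton
 * with its own period and its own work() step. */
void periodic_task(unsigned period_ms, void (*work)(void))
{
    unsigned long next = clock_ms();
    for (;;) {
        work();                /* one instance of the task */
        next += period_ms;     /* advance by a fixed period, so the
                                  schedule does not drift over time */
        sleep_until_ms(next);
    }
}
```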

Obviously, the tasks can have different periods: while we would like the car to react quickly to the button press, we cannot, as humans, optically read updated information from the display faster than at certain intervals anyway. Before we implement the actual tasks, we need a way for them to communicate. We will use a data structure through which the sensing tasks communicate their movement wishes to MotorcontrolTask.

We will use this global variable to pass driving information between tasks. Before the program is done, there is a potential problem to take care of: do you need to protect the variable in any way? If so, do that (a minimal sketch of such a shared variable follows below). Finally, compile and run the program. So far the behavior is not too exciting, since all the car can do is move forward; the exciting part comes next. Add a light sensor as in part 1. Using the above structure of periodic tasks, extend your program with a fourth periodic task "EdgeDetectionTask".
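As promised above, here is one way the shared driving-command variable could look in C; whether this is enough protection depends on the platform, as the comment notes:

```c
/* Movement wishes communicated from the sensing tasks to
 * MotorcontrolTask via one shared variable. */
enum drive_command { DRIVE_STOP, DRIVE_FORWARD, DRIVE_BACKWARD };

/* volatile: the reader must not cache a stale copy. Whether more
 * protection is needed depends on the platform: a single byte-sized
 * value written by one task and read by another is atomic on most
 * small MCUs, but anything wider or multi-field would need a lock
 * or a protected object. */
volatile enum drive_command command = DRIVE_FORWARD;
```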

EdgeDetectionTask should run with a short, fixed period and read the light sensor value in each instance. Using the sensor reading, it should set the driving command to a value that makes the car drive backwards when the edge of the table is reached. The task displaying useful information should be extended to display even more useful information.
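A sketch of what the task's work function could look like, reusing the shared command variable from above; read_light_sensor is a hypothetical helper, the threshold is a placeholder, and the assumption that the reading drops past the table edge needs checking against your hardware:

```c
/* Builds on the shared command variable sketched earlier. */
enum drive_command { DRIVE_STOP, DRIVE_FORWARD, DRIVE_BACKWARD };
extern volatile enum drive_command command;

#define EDGE_THRESHOLD 400          /* placeholder, calibrate per setup */

extern int read_light_sensor(void); /* hypothetical helper */

/* Work function for EdgeDetectionTask, run once per period. */
void edge_detection_step(void)
{
    /* Assumption: the reading drops when the sensor looks past the
     * table edge, because little light is reflected back. */
    if (read_light_sensor() < EDGE_THRESHOLD)
        command = DRIVE_BACKWARD;   /* wish: drive backwards */
}
```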

The hints given in the assignment may also help here. When you are done, compile, upload and test your program, and make sure the behavior is as desired. Again, note that hand-ins without meaningful comments will be discarded directly.

In the last part you will use all the knowledge acquired in the parts above to create a car that can handle several jobs simultaneously. You may use any of the techniques you learned above to define and schedule tasks, read sensors and send commands to the motors.

The line tracking should be done with the light sensor. A test track with the line to be followed is available from Jakaria's office and, during the labs, in the corresponding lab rooms. For accurate sensing you will have to recalibrate the reflection values of the track and the background before each race: the reflection value depends on the ambient lighting and the track condition, and the race will most likely take place on a fresh new track.
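One simple recalibration scheme, assuming you can average a few raw readings over the line and then over the background, is to put the threshold halfway between the two values:

```c
/* Midpoint thresholding: sample the line (black) and the background
 * (white) right before the race and split the difference. Readings
 * above or below this threshold then classify the surface. */
int midpoint_threshold(int black_avg, int white_avg)
{
    return (black_avg + white_avg) / 2;
}
```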

In order to pass the lab, all teams need to demonstrate a working car that can do both jobs accurately. This has to be demonstrated on Monday.

All three sensors will be placed at the front of the robot, pointing downward. One sensor will be placed above the black line and the other two to the left and right of the middle sensor. The output of an IR sensor changes when it moves from a white surface to a black one or vice versa, and this change will be detected by the microcontroller.

The outputs of the three IR sensors are analog in nature, so these signals cannot be processed directly by the microcontroller. For this, we will use the ADC of the ATmega16 microcontroller to convert the analog signals to digital values. After the conversion, the ATmega16 compares the digital sensor values with a reference value (3 V in our case) to determine the position of the sensors.
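A minimal polled ADC read for the ATmega16 could look as follows; it assumes AVcc = 5 V is used as the ADC reference, which puts the 3 V comparison level at roughly 614 of 1023 counts:

```c
#include <avr/io.h>

/* Read one channel of the ATmega16 ADC, single conversion, polled. */
uint16_t adc_read(uint8_t channel)
{
    ADMUX  = (1 << REFS0) | (channel & 0x07); /* AVcc reference, select channel */
    ADCSRA = (1 << ADEN) | (1 << ADSC)        /* enable ADC, start conversion */
           | (1 << ADPS2) | (1 << ADPS1);     /* prescaler 64 keeps the ADC clock in range */
    while (ADCSRA & (1 << ADSC))              /* ADSC clears when the conversion ends */
        ;
    return ADC;                               /* 10-bit result, 0..1023 */
}

/* With AVcc = 5 V, the 3 V reference level is about 3/5 * 1023 = 614 counts. */
#define BLACK_LEVEL 614
```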

When the middle IR sensor is above the black line, the ATmega16 sends the forward control signal to the robot's DC motor driver (LD) and the robot moves forward. When the left sensor is above the black line, the ATmega16 sends the left control signal and the robot turns left. Likewise, when the right sensor is above the black line, the ATmega16 sends the right control signal and the robot turns right.
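Putting the pieces together, one iteration of the control loop might look like this sketch; the channel assignments and the drive() interface are hypothetical, and whether black reads below or above the reference depends on your sensor circuit:

```c
#include <stdint.h>

extern uint16_t adc_read(uint8_t channel); /* from the ADC sketch above */

#define LEFT_CH     0    /* placeholder channel numbers, match your wiring */
#define MID_CH      1
#define RIGHT_CH    2
#define BLACK_LEVEL 614  /* ~3 V with a 5 V reference */

enum motion { FORWARD, LEFT, RIGHT };
extern void drive(enum motion m);  /* hypothetical motor driver interface */

void follow_line_step(void)
{
    /* Assumption: a reading below the reference means the sensor is
     * over black, since black reflects less IR than white. */
    int mid   = adc_read(MID_CH)   < BLACK_LEVEL;
    int left  = adc_read(LEFT_CH)  < BLACK_LEVEL;
    int right = adc_read(RIGHT_CH) < BLACK_LEVEL;

    if (mid)        drive(FORWARD);
    else if (left)  drive(LEFT);
    else if (right) drive(RIGHT);
    /* otherwise keep the previous command until the line is found again */
}
```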

These steps of ADC conversion, sensor comparison and robot control repeat continuously, and in this way the robot follows the black line. If you have a better idea, leave a comment. The next step is to sit down with a pen and paper and take readings from the track that you just made.

If you used the circuit diagram that I posted, or something similar, you should have indication LEDs at the outputs of the comparators.

I placed the green output LEDs directly above the line sensors to avoid confusion. This made my board look really bad; it was again a trade-off between looks and function, and I chose function. It is really helpful if those LEDs are right on top of the sensors. The IR emitter emits a constant IR beam. A white surface reflects most of the beam while a black surface absorbs most of it. The reflected beam is picked up by the IR detector, whose conduction increases with the received intensity, and hence the voltage at the output pin varies: 0 V in the absence of IR rays and 5 V at maximum intensity.

This output is fed to the comparator, which compares it with a reference voltage set by the potentiometer; that is how the line is sensed. Back to the post: place the bot over the line and note what it reads for the various positions of the line.

Make sure that the line is always detected by at least one sensor whenever it is within the range of the three sensors. This is something you should take utmost care with. As I mentioned in my previous post, the maximum distance between the sensors should not exceed the width of the line.

With the calibration values at hand, you can start programming your robot.
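Since the comparators in this circuit already digitize the sensor outputs, the program only needs to read three digital input pins. A tiny sketch, assuming a hypothetical wiring of the comparator outputs to PA0-PA2 of the AVR:

```c
#include <avr/io.h>

/* Hypothetical wiring: comparator outputs on PA0 (left), PA1 (middle)
 * and PA2 (right); port A pins are inputs after reset. */
uint8_t read_line_pattern(void)
{
    return PINA & 0x07;  /* 3-bit pattern, one bit per sensor */
}
```

Matching the returned pattern against the combinations you noted down with pen and paper (for example, only the middle bit set when the line is under the middle sensor) then drives the same forward/left/right decisions as in the ADC variant.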