NWAPW Year 2+3

Technical Topics

Part 8: Software Components [LattePanda Power-Up Checklist]

In last year's pedestrian project we identified several distinct software components which we distributed among five small teams for implementation. We can do that again, at least initially...
 

Part 8a: Hardware Interface

We are using the same camera as last year (updated code and documentation are available here on my website), so most of what we learned last year -- reassembling the pixels from the RGB bytes in the Bayer8 data, despeckling, presenting the image in a window for the rest of the world to see -- can be recycled. Last year we looked for a half-dozen posterized solid colors to eliminate the effects of shadows, but this year we need to go the other direction, initially toward a luminance-only image from which to extract the boundary white lines. If you are wildly successful, you might have time to try to infer virtual white lines from a boundary between uniform pavement and the visually noisy off-track stuff like parked cars or vegetation. We can call this the initial image processing.
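
For a concrete starting point, here is a minimal sketch of the luminance extraction, assuming a 2x2 R-G/G-B Bayer cell layout and one byte per pixel (check the camera documentation for the actual pattern); the fixed-point weights approximate the usual Y = 0.30R + 0.59G + 0.11B:

    /** Collapse one Bayer8 frame into a half-resolution luminance image.
     *  Assumes each 2x2 cell is [R G / G B]; verify against the camera docs,
     *  since some cameras start the pattern with G or B instead. */
    public static int[] bayerToLuma(byte[] bayer, int width, int height) {
        int outW = width / 2, outH = height / 2;
        int[] luma = new int[outW * outH];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                int base = (2 * y) * width + 2 * x;
                int r  = bayer[base] & 0xFF;             // top-left: red
                int g1 = bayer[base + 1] & 0xFF;         // top-right: green
                int g2 = bayer[base + width] & 0xFF;     // bottom-left: green
                int b  = bayer[base + width + 1] & 0xFF; // bottom-right: blue
                // fixed-point Y = 0.30*R + 0.59*G + 0.11*B (weights sum to 256)
                luma[y * outW + x] = (77 * r + 75 * (g1 + g2) + 29 * b) >> 8;
            }
        }
        return luma;
    }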

At the other end of the processing chain we have two kinds of output: presenting the same video image in a window with added annotations, so we can see what your software thinks it is doing, and more importantly, actually controlling the car (steering it and setting its speed). Last year Peter did the visual integration, but he may have other obligations this year, so you should think about image post-processing as a separate and important software component. You could use my code (part of the TrakSim demo program), but I'm sure you can do better.
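
As one possible shape for the annotation side, here is a minimal sketch (the names are illustrative, not taken from TrakSim) that paints a detected midline over each camera frame in a Swing window:

    import java.awt.*;
    import java.awt.image.BufferedImage;
    import javax.swing.*;

    /** Displays camera frames with the detected midline drawn on top.
     *  The midline array is hypothetical: midX[row] = detected lane-center
     *  column for that image row, or -1 if nothing was detected there. */
    public class AnnotatedView extends JPanel {
        private volatile BufferedImage frame;   // latest camera image
        private volatile int[] midX;            // latest midline estimate

        public void update(BufferedImage img, int[] middle) {
            frame = img; midX = middle;
            repaint();                          // schedule a redraw on the Swing thread
        }

        @Override protected void paintComponent(Graphics g) {
            super.paintComponent(g);
            BufferedImage img = frame; int[] mid = midX;
            if (img == null) return;
            g.drawImage(img, 0, 0, null);
            if (mid == null) return;
            g.setColor(Color.RED);
            for (int row = 0; row < mid.length; row++)
                if (mid[row] >= 0) g.fillRect(mid[row], row, 2, 2);
        }

        public static AnnotatedView openWindow(int w, int h) {
            AnnotatedView view = new AnnotatedView();
            view.setPreferredSize(new Dimension(w, h));
            JFrame win = new JFrame("What the car thinks it sees");
            win.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            win.add(view); win.pack(); win.setVisible(true);
            return view;
        }
    }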

We have some example code on the LattePanda forum (better documentation and the latest download are available here on my website) for using the Firmata APIs to control servos, but determining the proper timing and settings to get the hardware servos to behave as intended usually involves substantial trial and error. Servo control might be a software component in its own right, and if you do it well, you will have a valuable skill for a career in robotics or other hardware/software interface fields in the computational universe. If you hope to scale this project up to a larger car with more inertia, there will be a response-time lag between the commands issued by the decision-making software components and what the car actually does. You can start to prepare for that kind of project trajectory by emulating a response envelope in your servo control on this small model, where the response otherwise seems effectively instantaneous.
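
One way to emulate such a response envelope is to rate-limit how fast the commanded servo position is allowed to change. Here is a minimal sketch; the writer callback stands in for whatever Firmata call you actually use to send the servo command, and the numbers are assumptions to tune:

    import java.util.function.IntConsumer;

    /** Rate-limits servo commands so the model car mimics the sluggish
     *  response of a heavier vehicle.  Call tick() at a fixed period. */
    public class EnvelopedServo {
        private final IntConsumer writer;   // whatever actually sends the Firmata command
        private final int maxStepPerTick;   // max change in degrees per tick
        private int current, target;

        public EnvelopedServo(IntConsumer writer, int startDegrees, int maxStepPerTick) {
            this.writer = writer;
            this.current = this.target = startDegrees;
            this.maxStepPerTick = maxStepPerTick;
            writer.accept(startDegrees);
        }

        public void setTarget(int degrees) { target = degrees; }

        /** Advance the commanded position one limited step toward the target. */
        public void tick() {
            if (current == target) return;
            int step = Math.max(-maxStepPerTick,
                       Math.min(maxStepPerTick, target - current));
            current += step;
            writer.accept(current);
        }
    }

Called every 20 ms with maxStepPerTick = 2, this spreads a 40-degree steering swing over about 400 ms, which is more like what a heavier car's steering would do.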

A really ambitious group might replace the Firmata code in the Arduino with its own PWM generator (Custom Arduino code), but you would need to learn a lot of specialized tools, which could be a valuable skill in its own right. The Arduino is able to do both PWM output and input simultaneously; one very useful task we could give it is the job now handled by a hardware deadman switch: detecting whether a given pulse train represents the trigger pulled (or not), and reducing the ESC control from the LP to idle if not.

Another very useful function of the Arduino is to decode input signals (such as the "RPM" signal from the car's transmission) and pass them on to the LP, so that the control program can know the actual car velocity. With a little more cleverness inside the Arduino, it could optionally route the incoming signals from the radio receiver directly to the car's servos for an immediate "out-of-box" experience (radio control of the car) without changing any wiring, just a control parameter sent to the Arduino from the Java code in the LP. When we scale up the APW program to other cities, it's important that novice users -- and their mentors -- get a positive experience with very little opportunity for things to fail. Then they can add their own code and know that if it doesn't work, it's their code, not ours.
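
On the LP side, turning the decoded "RPM" pulses into a velocity is mostly arithmetic. Here is a minimal sketch, assuming the Arduino reports how many pulses it counted since the last query; the two constants are placeholders you must measure on your own car:

    /** Converts transmission pulse counts into car speed.
     *  PULSES_PER_WHEEL_REV and WHEEL_CIRCUMFERENCE_M are placeholders;
     *  measure them on your car (mark a wheel, roll it one turn, count). */
    public class Speedometer {
        static final double PULSES_PER_WHEEL_REV  = 6.0;   // placeholder
        static final double WHEEL_CIRCUMFERENCE_M = 0.35;  // placeholder

        private long lastNanos = System.nanoTime();

        /** @param pulses count reported by the Arduino since the previous call
         *  @return estimated speed in meters per second */
        public double update(int pulses) {
            long now = System.nanoTime();
            double seconds = (now - lastNanos) / 1e9;
            lastNanos = now;
            double wheelRevs = pulses / PULSES_PER_WHEEL_REV;
            return seconds > 0 ? wheelRevs * WHEEL_CIRCUMFERENCE_M / seconds : 0;
        }
    }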
 

Part 8b: Track Following

Finding the middle of the lane in the visual image is pretty easy. Deciding which parts of that virtual middle line to pay attention to in deciding which way to steer the car requires analysis and experimentation. Whatever your camera might see very close to the car cannot have any useful effect on steering, because you don't have time to respond to it before the car has passed that part of the track. Similarly, distant parts of the image are irrelevant, because you may never get there if the track turns away. You need to build a software model of which parts of the middle line are most relevant, and a weighting scheme that sets the steering response accordingly, so that discontinuities in the middle-line data do not confuse your car.
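
Here is a minimal sketch of one such weighting scheme; the band limits and the triangular weights are assumptions to experiment with, not magic numbers:

    /** Computes a steering error from the detected midline, ignoring rows
     *  too close to the car to act on and rows too far away to matter.
     *  midX[row] = lane-center column for that image row, or -1 if unknown. */
    public static double steeringError(int[] midX, int imageWidth) {
        int nearRow = midX.length - 1;        // bottom of image = just ahead of the car
        int ignoreNear = midX.length / 6;     // assumption: lowest sixth is too late to use
        int ignoreFar  = midX.length / 3;     // assumption: top third is too speculative
        double sum = 0, totalWeight = 0;
        for (int row = ignoreFar; row <= nearRow - ignoreNear; row++) {
            if (midX[row] < 0) continue;      // gap in the data: skip it, don't panic
            // triangular weight, peaking in the middle of the useful band
            double w = Math.min(row - ignoreFar, (nearRow - ignoreNear) - row) + 1;
            sum += w * (midX[row] - imageWidth / 2.0);
            totalWeight += w;
        }
        // positive = lane center is right of image center, in pixels
        return totalWeight > 0 ? sum / totalWeight : 0;
    }

Feed the result into your PID controller; the band limits and the weight shape are exactly the knobs the experimentation is about.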

Noah Koontz, who was on the team last year, offers these observations about some of the cool stuff he found in robotics and computer vision in the last year:
If you aren't familiar with FRC (the FIRST Robotics Competition), here it is in a nutshell. Basically, teams build robots to complete challenges and compete against other robots. The robots built are big and heavy, and must move very quickly and efficiently in order to be successful. Oftentimes computer vision is included in the FRC challenge in the form of reflective tape, which can be viewed by a camera surrounded by a light source. The Cheesy Poofs, a world-champion team, did a lecture on how they implement computer vision on their robot, which I would highly recommend watching. Some ideas covered that I think pertain to APW:

- A ton of computer vision happens before the actual vision part. The very first step in detecting reflective tape is reducing the camera exposure, turning your complex blob-filtering algorithm into a simple one. We didn't mess very much with camera settings for pedestrian detection, and I think it would be a good idea to cover that stuff early on, so that when it comes time to fiddle with the settings we know which knobs to turn.

- There are two main problems with directly feeding pixels off center into a PID machine:

- Latency: Since FRC robots move ridiculously fast, the time taken to transfer and process the frame is too long: by the time the processor spits out an error, the robot has already moved from where the frame was taken, causing it to overcorrect. They solve this by measuring the robot's change in position using other sensors, then applying that change in position to the detected reflective tape and calculating an error from there. This may or may not be a problem for us, since we are not constrained to move fast.

- Units: Pixels off center is pretty close to, but not quite, degrees. If we are using a slow-moving robot and a simple control system, units will probably never become an issue. However, if you want to do the latency correction above, or more complex trajectory control, you need an (x,y) target. Fortunately, this turns out to be pretty simple: if we assume that every object we are detecting has a fixed height off the ground (such as a painted line on the road), then given the properties of the camera we can solve for the world position of the detected object. The lecture details this using the pinhole camera model at 30:41. (A sketch of this projection follows the letter below.)


- They don't cover this in detail, but this same FRC team has a system for driving a robot given a set of waypoints, using trajectory control as an alternative to PID, which would be far more complex but more effective (less drifting and spinning out) at higher speeds. It would also mean, however, that camera data would be returned in the form of waypoints: something much more easily implemented at the start of the project, as we learned last year. I, for one, would like to see the car move fast, but you guys know better than I do whether something like that is realistic.

Let me know if you have any questions. I'm looking forward to the start of APW!

Thank you,
Noah
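
Here is a minimal sketch of the ground-plane projection Noah describes, assuming a pinhole camera at a known height, pitched down by a known angle, with a focal length expressed in pixels; all three constants must be calibrated for your own camera:

    /** Projects an image pixel onto the ground plane (pinhole camera model).
     *  All three constants are assumptions you must calibrate: camera height
     *  above the ground, downward pitch, and focal length in pixel units. */
    public class GroundProjector {
        static final double CAM_HEIGHT_M  = 0.20;               // placeholder: measure it
        static final double CAM_PITCH_RAD = Math.toRadians(15); // placeholder: measure it
        static final double FOCAL_PX      = 600;                // placeholder: calibrate

        /** @param u pixel column relative to image center (right positive)
         *  @param v pixel row relative to image center (down positive)
         *  @return {forward, lateral} in meters, or null if at/above the horizon */
        public static double[] pixelToGround(double u, double v) {
            double belowHorizon = CAM_PITCH_RAD + Math.atan2(v, FOCAL_PX);
            if (belowHorizon <= 0) return null;       // ray never hits the ground
            double forward = CAM_HEIGHT_M / Math.tan(belowHorizon);
            double slant = Math.hypot(CAM_HEIGHT_M, forward); // camera to ground point
            double lateral = u * slant / Math.hypot(FOCAL_PX, v);
            return new double[] { forward, lateral };
        }
    }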

Part 8c: Time and Speed

Your first efforts to steer the car will probably leave the speed control manual (using the radio remote). That way, if -- maybe I should say when -- something goes wrong, you can stop the car before it crashes into something (or somebody) simply by releasing the trigger on the remote control. Then you can program the car for a relatively slow steady speed, slow enough that somebody can walk alongside and pick the car up off the ground before it crashes into something. Better: we created a deadman switch, which you use the same way as when you are controlling the speed from the remote, except that releasing the trigger only disables the LP's control. When you have a high degree of justified confidence in your steering control, you can ramp up the velocity, but see Noah's comments above.

As the car speed increases, you will begin to notice the effects of how long it takes your software to process the incoming image before you tell the steering wheels which way to turn. You cannot take sharp turns on the track at high speed, not only because the car might spin out or roll over from lateral acceleration forces, but also because your software cannot respond quickly enough. This is why race tracks have no sharp turns. As you extend your car control towards higher speeds, you need to adjust the planning that goes into steering to accommodate the increased effects of latency. If your program sees a sharp turn coming up, you probably want to decelerate the car going into the turn. Speed analysis is an important part of making your car more sophisticated.
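
Here is a minimal sketch of that kind of speed planning, assuming you can estimate the curvature of the track a short distance ahead and have measured your end-to-end processing latency; every constant is a placeholder to tune on your own car:

    /** Picks a target speed from the sharpness of the track ahead and the
     *  processing latency.  All constants are placeholders to tune. */
    public static double targetSpeed(double curvaturePerMeter, double latencySec) {
        final double MAX_SPEED     = 3.0;  // m/s on a straightaway (placeholder)
        final double MIN_SPEED     = 0.5;  // m/s, walking pace (placeholder)
        final double MAX_LAT_ACCEL = 2.0;  // m/s^2 before sliding (placeholder)
        double v = MAX_SPEED;
        if (curvaturePerMeter > 1e-6) {
            // lateral acceleration in a curve is v^2 * curvature; cap v there
            v = Math.min(v, Math.sqrt(MAX_LAT_ACCEL / curvaturePerMeter));
            // don't travel more than a quarter of the turn radius while one
            // frame is still working its way through the processing chain
            double radius = 1.0 / curvaturePerMeter;
            v = Math.min(v, 0.25 * radius / Math.max(latencySec, 0.01));
        }
        return Math.max(MIN_SPEED, v);
    }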
 

Part 8d: Developmental Support

Last year one member of the group maintained a GitHub repository for the project code. Somebody needs to be responsible for setting that up and maintaining it again this year.

The LattePanda has Wi-Fi hardware built in, so you might find it convenient to build a Wi-Fi remote interface to view what your software is seeing and doing on other computers connected to the internet (like your laptops). You could also use such an interface to start and stop the car. This is not essential to driving the car, but it is a great convenience, and seeing on the auditorium projector what the car sees in real time makes a compelling presentation on the last day.
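
If you try it, one simple way to do the viewing half is a tiny HTTP server (the JDK's built-in com.sun.net.httpserver is enough) that returns the latest annotated frame as a JPEG, which any browser on the same network can poll. A minimal sketch:

    import com.sun.net.httpserver.HttpServer;
    import java.awt.image.BufferedImage;
    import java.io.ByteArrayOutputStream;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import javax.imageio.ImageIO;

    /** Serves the latest annotated frame at http://<car-ip>:8080/frame.jpg */
    public class FrameServer {
        private volatile BufferedImage latest;   // most recent annotated frame

        public void publish(BufferedImage frame) { latest = frame; }

        public void start() throws Exception {
            HttpServer http = HttpServer.create(new InetSocketAddress(8080), 0);
            http.createContext("/frame.jpg", exchange -> {
                BufferedImage img = latest;
                if (img == null) {                    // no frame captured yet
                    exchange.sendResponseHeaders(503, -1);
                    return;
                }
                ByteArrayOutputStream buf = new ByteArrayOutputStream();
                ImageIO.write(img, "jpg", buf);
                exchange.getResponseHeaders().set("Content-Type", "image/jpeg");
                exchange.sendResponseHeaders(200, buf.size());
                try (OutputStream out = exchange.getResponseBody()) {
                    buf.writeTo(out);
                }
            });
            http.start();
        }
    }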
 

Part 8e: Dividing into Work Groups

As I see it, these are the essential software components discussed above, which you should staff appropriately with team members (the links refer back to where each term is discussed in this page):
 
1. Initial image processing

2. Steering analysis (including PID)

3. Speed analysis (possibly with reference to a map model)

4. Servo control

5. Image post-processing


These are some tasks that may or may not deserve separate working group support, but you should think about:
 

6. GitHub repository

7. Wi-Fi remote interface

8. Custom Arduino code


It looks like we have enough participants, and there are some interesting tasks that can be worked on in parallel, then integrated after the basic car functions work properly:

9. Building & tracking a "3D model" of the track for predictive steering and speed control

10. Adjusting the speed for the processing latency (slower for tight turns, faster for straight)

11. Predictive steering, so the car drives more smoothly

12. Detecting a stop sign and stopping at the right place

13. Inferring implied lines from parked cars or other non-traffic lane objects in the scene

14. Staying an appropriate distance behind another car in the same lane

15. Making turns (from a script?) when there are choices

16. Passing another car in the road (probably need extra hardware, like another camera)

17. Parallel parking (probably need extra hardware, like more cameras)

18. Creating a website for showing off the project in Belgium and for expansion beyond Portland

Part 9: Advanced Projects

The LattePanda has several layers of software capability, which can be exploited to drive the cost down or else the performance up. These are discussed in a separate page.
 

Part 10: Checklist Processes

The hardware we are working with this year is in its "prototype" level of development. Even commercial products (like the LattePanda) sometimes have a less than finished user interface. These checklist sequences of events are preliminary, subject to change as the hardware is developed (possibly even before the 2018 program).

The LattePanda (LP) is not as refined as your average desktop or laptop computer. The instructions that came with this board gave a particular sequence for powering it up or down. Here you see (for reference) a partial diagram of the edge of the LP opposite the USB connectors:

On the near right corner in this diagram are two tiny buttons facing outward from the board. The second one in from the corner is labelled in the diagram as "POWER" and is used in the power-up and -down sequences. On the opposite corner is a little block of four pins sticking straight up, shown here as red and gray. This is where the battery power comes in; it should already be connected in the system you get. Extending along the edge away from the power connector is a line of six 3-pin servo connectors, but only the far three (not shown here) are outputs. They are not keyed, so you must be careful to ensure that the black (or brown) ground wire of each servo cable is closest to the edge, and the white (or yellow) signal wire is closest to the metal plate covering the CPU. There is a blue LED in the corner of that metal plate, which displays the Arduino's "D13" output, under the control of the LP.
 

Part 10a: Powering Up LattePanda (see diagram above)

1. Supply main power (plug in or switch on battery, or plug in wall dongle). The blue Arduino "D13 LED" and the red pilot on the back side will both come on bright, but the Arduino LED will flicker some and then go dim.

2. Wait for the red pilot light to go out on the bottom side of the LP board (about a minute or so).

3. Press the LP "POWER" button.

4. Wait for the Win10 welcome screen (another couple minutes) & log in. Then the LP is ready to use.

Part 10b: Powering Down LattePanda

0. Turn off the ESC switch first. All the LEDs in the car (below the computer deck) should go out when you do this.

1. Press the LP "POWER" button. I have it programmed (in Win10) to Hibernate, which is faster than Shut Down. The screen will go dark immediately, but the hibernation process (or shut down) takes a minute or two, with the red pilot light on the bottom side of the LP board still on.

2. Wait for the red pilot light to go out on the bottom side of the LP board. The blue Arduino LED (if you left it on) will remain on, because the Arduino continues to run as long as there is power.

3. Remove power from the LP by unplugging the battery, or if there is a switch, turning the switch off. If you have separate batteries for the camera and servos, unplug them too.

4. Traxxas warns that the big motor battery should also be unplugged when you are done for the day. This would also be a good time to recharge the battery.

Part 10c: Preparing to Drive the Car with the DeadMan Switch

The "deadman" switch can be used to manually shut the drive motor of car down when (not if) it misbehaves. I tested a circuit like that shown in the TrakSim document, which relies on the remote control speed trigger being depressed to enable the car to drive.
0. Make sure the car is properly positioned on its track, or else up on blocks with the wheels clear of obstructions.

1. Power up the LP (see above) and make sure your software is up to date.

2. If power to the camera and/or steering servo are separately controlled, turn that on next.

3. After you power up the LP, you must run your code at least once to initialize the Arduino PWM. You may find it necessary to terminate this first run before continuing with Step #4.

4. If the motor battery was unplugged, plug it in. Turn on the remote transmitter.

5. Turn the ESC switch on and wait for it to sing its little song. It will start to whine until you press and hold the "deadman" switch (the remote transmitter trigger). If it continues to whine, it may be that the LP+Arduino did not start to provide a PWM signal (Step #3). Otherwise it will sing its little song a second time, after which it is ready to take commands. Verify that the whine resumes if you release the trigger. All the lights below the computer deck should be solid green at this time (or maybe one solid yellow, the rest all green).

6. Start your drive software. If the car starts to run away, release the "deadman" switch.

7. If left idle for any period of time, the ESC will chirp every couple seconds to remind you to turn it off.

8. The ESC will not restart, even with the deadman switch closed, if the LP/Arduino is sending out a motor control other than stopped. Either terminate your program (which, if you did it correctly, resets all servos to their neutral position), or else have the program reset the servos when it detects that the car is stopped.
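
One way to make sure the terminate-your-program path really does neutralize the servos is a JVM shutdown hook. Here is a minimal sketch; the two Runnables stand in for your own Firmata calls that set the ESC and steering back to neutral:

    /** Call once at startup; on normal exit or Ctrl-C, parks both servos.
     *  The two Runnables stand in for whatever Firmata calls set your
     *  steering and ESC back to their neutral pulse widths. */
    public static void resetServosOnExit(Runnable steerToNeutral, Runnable escToIdle) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            try {
                escToIdle.run();       // stop the drive motor first
                steerToNeutral.run();  // then center the steering
                Thread.sleep(100);     // give the last pulses time to go out
            } catch (Exception e) {
                // nothing useful to do this late; the deadman switch is the backstop
            }
        }));
    }

(A hard crash will skip the hook, which is one reason the hardware deadman switch stays in the loop.)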


You can find other checklists in the TrakSim documentation.

Any questions or comments? This is your project.

Tom Pittman -- Starting July 13 through the end of the 4-week workshop, use the email given on July 16 to reach me.

Rev. 2019 January 12