Mobile Tutorial 3 project build order
A) If you have built all the other projects from the previous solutions step by step, you just need to build the projects in Column A.
B) If you haven’t built any of the previous tutorial solutions, you need to build all included projects in the order shown in Column B.
3. ArticulatedArm (in abstract services)
4. DriveDifferentialTwoWheel (in abstract services)
5. KUKARobotTool (in abstract services)
6. KUKAUniversalMotionPlanning (in abstract services)
Most robotic applications need some sort of feedback, because every process involves a certain level of uncertainty. This uncertainty can range from small deviations to completely unknown parameters within the task.
Processes with little uncertainty often need only a little guidance. Examples of such processes are:
- Windshields are glued into car bodies. The positions of the car bodies arriving on a conveyor usually vary within a couple of centimeters, and these deviations need to be compensated for an exact gluing process.
- Small parts arrive on a conveyor in different orientations and need to be picked up correctly for placement in a pre-oriented cradle.
Processes with great uncertainty are usually found in service robotics:
- Mapping an unknown environment (with the help of SLAM algorithms)
- Handling objects whose properties change during a motion (e.g., water moving in a container)
A robotic arm without sensors needs precise coordinates as a target position in order to do all the calculations you have seen in the arm tutorials. If we don’t have these precise coordinates, we need to obtain them with the help of sensors.
This used to be a problem in industrial environments, since sensors were really expensive. But sensor prices are coming down just like those of other electronic hardware, so sensors are becoming more widely available and processes can rely more and more on live data. For simulation environments, this means they should also allow the integration of sensors that behave as realistically as possible, so that simulated results can be transferred to the real world as cost-effectively as possible.
In this tutorial, we add virtual sensors to our machine. The two sensor types that we use are already provided with Microsoft Robotics Studio. We add two laser range finders and a webcam. To show the sensor feedback, the dashboard now has an extra page that displays the data of the laser range finders. The webcam can be viewed in Internet Explorer. (In the task tutorial, you will see the webcam image integrated into the dashboard.)
Laser range finder
Since we use the laser range finder that ships with Microsoft Robotics Studio, we don’t want to go too deep into the details of how it works internally. In this tutorial, we want to show how we use it and how we integrated it into our simulation environment.
When the laser range finder (LRF) takes a measurement, it returns an array of distance values; each value comes from one of the laser rays emitted by the sensor. 360 rays are cast within a half circle, so the angular resolution is half a degree. The distance values are given in millimeters. With this information you get fairly accurate positions of objects relative to the sensor.
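To make this data format concrete, here is a minimal sketch (in C#, the language of the MSRS tutorials) of how one such scan could be converted into Cartesian points relative to the sensor. The type and method names are ours, and the ray ordering (ray 0 pointing to the far right) is an assumption:

```csharp
using System;

// Sketch: convert one scan (360 distances in mm, 0.5 deg per ray over a half
// circle) into Cartesian points relative to the sensor. The ray ordering
// (ray 0 at -90 deg, i.e. to the right) is an assumption, not taken from
// the shipped LRF service.
static class ScanGeometry
{
    public struct ScanPoint
    {
        public double X;  // lateral offset in mm (positive = left)
        public double Z;  // forward distance in mm
    }

    public static ScanPoint[] ToCartesian(int[] distances)
    {
        ScanPoint[] points = new ScanPoint[distances.Length];
        for (int i = 0; i < distances.Length; i++)
        {
            double angle = (i * 0.5 - 90.0) * Math.PI / 180.0;
            points[i].X = distances[i] * Math.Sin(angle);
            points[i].Z = distances[i] * Math.Cos(angle);
        }
        return points;
    }
}
```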
We have two sensors, one for the platform to navigate and one at the end of the arm.
In this gripping position, you can see two sensor echoes. The one on the table stand is generated by the LRF sitting in the platform; the table in this picture stands to the right of the platform. The visible laser echo covers only half of the table stand, because the stand sits at the very right edge of the platform sensor’s scan area.
The second visible laser echo appears right in front of the gripper. This echo comes from the second LRF, which sits there at the end of the arm. The following picture shows the data representation of the LRFs within the dashboard.
The upper data display shows the sensor data from the platform sensor. You can clearly see the table stand at the very right side of the scan area. You can also see an object roughly 30 degrees to the left of the middle: this is the other table in the scene, which appears at a greater distance.
The lower display shows the data from the LRF at the end of the arm. It shows a strong echo for the cube just in front of the gripper.
When measuring objects close to the sensor, they can seem indistinguishable from one another. The following picture shows the laser being projected onto the table at an angle.
In this configuration, the cube is not visible in the dashboard sensor display.
The table itself is clearly distinguishable from the echoes coming from the ground, but in terms of echo strength the table already fills the display, and the cube can’t be seen. The raw data, however, allows distinguishing the cube from the table surface:
A sudden decrease of roughly 70 mm in the distance between surface and sensor identifies the cube’s position on the table. Edges like these are detectable, but for the sake of clear and robust programming, we always try to position the sensor so that ‘8000’ values appear on one side of the edge (‘8000’ is the maximum value the sensor reports; it is also returned when no echo is received at that particular spot). You will see this type of edge detection used in the task tutorial.
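As an illustration, here is a small sketch of this kind of edge detection on the raw data. The class and method names are ours, and the scan direction (the object appearing after a run of ‘8000’ values) is an assumption:

```csharp
static class EdgeDetection
{
    const int NoEcho = 8000;  // value reported when no echo is received

    // Returns the index of the first ray whose reading drops from 'no echo'
    // to a real distance, i.e. the first ray that hits the object; -1 if no
    // such edge exists in this scan.
    public static int FindEdge(int[] distances)
    {
        for (int i = 1; i < distances.Length; i++)
        {
            if (distances[i - 1] >= NoEcho && distances[i] < NoEcho)
                return i;
        }
        return -1;
    }
}
```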
Webcam
We don’t actually use the camera in these tutorials for any measurement purposes. We felt, though, that having a webcam at the end of the arm might inspire many useful applications involving image recognition algorithms. A mobile camera like this could serve as a basis for SLAM (simultaneous localization and mapping) algorithms or other image registration algorithms.
A current drawback of the provided webcam is that the camera picture cannot roll according to the camera orientation, so the horizon always appears horizontal in the images. This is especially visible when the end of the arm rotates: the gripper seems to rotate around the center of the picture.
(Figure: webcam images with Joint7 at 0 deg, 45 deg, and 90 deg)
Coding for Mobile Tutorial 3:
The dashboard subscribes to notifications from the simulated laser range finders. The incoming data from both sensors is displayed on an extra page, so the dashboard needed a new page in Mobile Tutorial 3 as well.
Mobile Tutorial 3 Services Overview:
New services for Mobile Tutorial 3:
· MobileTutorial3Dashboard: User interface with an added page that displays the laser range finder data
· SimulatedLRF: MSRS provided laser range finder
· SimulatedWebcam: MSRS provided webcam
The diagram shows that the new dashboard actually receives notifications only from the laser range finder entities. These notifications carry the current sensor data. There is no notification from the webcam service to the dashboard, since the webcam picture is not shown in the dashboard of this tutorial. We included the service in this overview anyway, because we do instantiate a simulated webcam while starting up the simulation. To see an implementation of how the webcam picture can be displayed in the dashboard, please refer to the task tutorial.
As seen before, two laser range finders are instantiated in the scene, one for the platform and one for the end of the arm.
The laser range finder instance for the platform is called ‘LRFPlatform’ and is inserted into the platform entity right after the platform is created.
The laser range finder instance for the end of the arm is created as the child entity of a box, since every laser range finder entity needs a parent object. This laser range finder entity is named ‘LRFArm’.
This box is then attached to the end of the LBR3 arm shortly after its creation in the ‘PopulateWorld’ function of the simulation service.
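As a rough sketch of what this setup can look like in code, the following fragment follows the pattern of the MSRS simulation tutorials. The mounting poses, the box mass and dimensions, and the name ‘LRFArmBox’ are placeholder assumptions, and the actual attachment of the box to the LBR3 in ‘PopulateWorld’ is elided here:

```csharp
using Microsoft.Robotics.PhysicalModel;
using Microsoft.Robotics.Simulation.Engine;
using Microsoft.Robotics.Simulation.Physics;
using simlrf = Microsoft.Robotics.Services.Simulation.Sensors.LaserRangeFinder.Proxy;

// Inside the simulation service: create both LRF entities and bind a
// SimulatedLRF service instance to each one.
void AddLaserRangeFinders(VisualEntity platformEntity)
{
    // Platform LRF, inserted into the platform entity right after its creation.
    LaserRangeFinderEntity platformLrf =
        new LaserRangeFinderEntity(new Pose(new Vector3(0, 0.3f, 0)));  // pose: placeholder
    platformLrf.State.Name = "LRFPlatform";
    CreateService(
        simlrf.Contract.Identifier,
        Microsoft.Robotics.Simulation.Partners.CreateEntityPartner(
            "http://localhost/" + platformLrf.State.Name));
    platformEntity.InsertEntity(platformLrf);

    // Arm LRF: needs a parent object, so it becomes the child of a small box
    // that is later attached to the end of the LBR3 arm in 'PopulateWorld'.
    SingleShapeEntity sensorBox = new SingleShapeEntity(
        new BoxShape(new BoxShapeProperties(
            0.05f, new Pose(), new Vector3(0.05f, 0.05f, 0.05f))),  // mass/size: placeholders
        new Vector3(0, 0, 0));
    sensorBox.State.Name = "LRFArmBox";  // hypothetical name

    LaserRangeFinderEntity armLrf = new LaserRangeFinderEntity(new Pose());
    armLrf.State.Name = "LRFArm";
    CreateService(
        simlrf.Contract.Identifier,
        Microsoft.Robotics.Simulation.Partners.CreateEntityPartner(
            "http://localhost/" + armLrf.State.Name));
    sensorBox.InsertEntity(armLrf);
    // ... attaching 'sensorBox' to the arm is done separately in 'PopulateWorld'
}
```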
In order to receive the notifications from the laser range finders, we need their ports so we can subscribe to the data, and we also need to provide a port for the incoming notifications. For this, the directory service is queried to find the right laser range finder instances. The directory service answers with a list of running services, and from this list we ‘connect’ to the services named ‘LRFArm’ and ‘LRFPlatform’ by storing their ports for the subscription.
We created a function (‘SubscribeToSickLRF’) for this subscription because we need to call it for both sensors and don’t want to duplicate code.
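A hedged sketch of what such a helper can look like inside the dashboard service (a DsspServiceBase subclass): the directory query described above yields the URI of each LRF instance, which ServiceForwarder turns into an operations port. The parameter names and the handler-passing style are ours, not necessarily the tutorial’s exact code:

```csharp
using System;
using Microsoft.Ccr.Core;
using sicklrf = Microsoft.Robotics.Services.Sensors.SickLRF.Proxy;

// Sketch of a subscription helper in the spirit of 'SubscribeToSickLRF'.
// 'serviceUri' is the URI of one LRF instance found via the directory query;
// passing in the handler lets the same function serve both sensors.
void SubscribeToSickLRF(Uri serviceUri, Handler<sicklrf.Replace> onReplace)
{
    // forwarder port to the running LRF service instance
    sicklrf.SickLRFOperations lrfPort =
        ServiceForwarder<sicklrf.SickLRFOperations>(serviceUri);

    // local port that will receive the notifications
    sicklrf.SickLRFOperations notifyPort = new sicklrf.SickLRFOperations();
    lrfPort.Subscribe(notifyPort);

    // run 'onReplace' for every Replace notification, i.e. every new scan
    Activate(Arbiter.Receive<sicklrf.Replace>(true, notifyPort, onReplace));
}
```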
Now the handler attached to the corresponding notification port can receive the laser range finder data.
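A minimal handler sketch to go with it; ‘DistanceMeasurements’ is the distance array field of the SickLRF state as we recall it, and the display call is hypothetical:

```csharp
// Handler for the Replace notifications: extract the raw distance array
// (one millimeter value per ray) and pass it on for display.
void OnLaserReplace(sicklrf.Replace replace)
{
    int[] distances = replace.Body.DistanceMeasurements;
    // UpdateSensorPage(distances);  // hypothetical dashboard display call
}
```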
In this tutorial, the data is not fed into any algorithm; it is simply displayed on the sensor page of the dashboard. How this data is used for navigation is shown in the task tutorial.