Towards Autonomous Combat

Jaemus
Experienced Roboteer



quote:
Originally posted by marto:
Also if anyone is still living in windows land


You mean the rest of the world? Very Happy
_________________
<Patrician|Away> what does your robot do, sam
<bovril> it collects data about the surrounding environment, then discards it and drives into walls

Post Tue Nov 30, 2010 11:29 am 
marto
Experienced Roboteer



Pretty much. Yes, it is a bit of a long way round, but it's hard enough to get it reliable with a person, so keeping the robot and the automation stuff as separate as possible is probably a good thing.

So the Kinect makes a lot of data. I have saved the depth image to a ROS .bag file here: http://dl.dropbox.com/u/815267/kinectDepthVideo.bag which can be viewed with image_viewer. You probably need to run through the ROS tutorials before you jump into this. It's about 30 seconds long and is just the Kinect tilting up and down on my desk.

220 MB for 30 seconds is pretty big, and that's just images. I did save point cloud data, but it was 2.2 GB for 30 seconds, so I couldn't upload that. The size of the data is enormous.
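If anyone wants to poke at the live stream rather than the bag, something like this rospy snippet is all it takes (a rough sketch; the /camera/depth/image topic name assumes the openni_camera driver of the day, so check rostopic list on your own setup):

code:
import rospy
from sensor_msgs.msg import Image

def on_depth(msg):
    # 640x480 at 30 Hz with ~2 bytes/pixel is roughly 18 MB/s raw
    rospy.loginfo("depth frame: %dx%d, %d bytes",
                  msg.width, msg.height, len(msg.data))

rospy.init_node("depth_probe")
rospy.Subscriber("/camera/depth/image", Image, on_depth)
rospy.spin()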


Steve
_________________
Steven Martin
Twisted Constructions
http://www.botbitz.com

Post Tue Nov 30, 2010 11:30 am 
chris...




quote:
Originally posted by marto:
220 MB for 30 seconds is pretty big, and that's just images. I did save point cloud data, but it was 2.2 GB for 30 seconds, so I couldn't upload that. The size of the data is enormous.


Steve


And you want to process all this data, quickly...

What's the difference between the point cloud data and this depth image .bag file?
Where does the point cloud data come from?
Is it a separate stream or an extraction of the depth image?

Post Tue Nov 30, 2010 1:23 pm 
marto
Experienced Roboteer



10 Hz should be achievable. ROS includes a point cloud message type, which gives each point a location in 3D space; this is generated from the 2D depth image.
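Roughly how each depth pixel becomes a 3D point, as a minimal numpy sketch of the pinhole model (fx, fy, cx, cy here are placeholder intrinsics, not a real Kinect calibration):

code:
import numpy as np

fx = fy = 525.0            # placeholder focal lengths (pixels)
cx, cy = 319.5, 239.5      # placeholder principal point

def depth_to_points(depth):
    # depth: (480, 640) array of metres; back-project every pixel
    v, u = np.indices(depth.shape)
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.dstack((x, y, depth))   # one (x, y, z) point per pixel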

Obviously you can't process all that data in real time, but making a few simple assumptions can drastically reduce the amount you need to process. First, take only a subset of the data: put a bounding box on the image around what can be considered the floor. Then do a background subtraction. All you should be left with are the points that correspond to the robots, so you can then try to fit those to 3D models of the robots. ROS already has the capability to do most of this.
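The first two steps amount to only a few lines. A sketch, with a made-up floor ROI and threshold:

code:
import numpy as np

ROI = (slice(80, 400), slice(100, 540))  # hand-picked rows/cols covering the floor
THRESH = 0.05                            # treat >5 cm change as a robot

def robot_mask(depth, background):
    # background: depth frame of the empty arena, same shape as depth
    roi, ref = depth[ROI], background[ROI]
    valid = (roi > 0) & (ref > 0)        # Kinect reports 0 where depth is unknown
    return (np.abs(roi - ref) > THRESH) & valid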

Steve
_________________
Steven Martin
Twisted Constructions
http://www.botbitz.com

Post Tue Nov 30, 2010 1:34 pm 
marto
Experienced Roboteer



The first two operations are probably easiest to do in an image-processing style.
_________________
Steven Martin
Twisted Constructions
http://www.botbitz.com

Post Tue Nov 30, 2010 1:36 pm 
chris...




I assume the battle area is a box-type object.
With a known, fixed camera position you should be able to eliminate all but the robots.

Post Tue Nov 30, 2010 1:57 pm 
Jaemus
Experienced Roboteer



The Sidetracked arena isn't a box, but I suppose you're a long way off taking it interstate Smile
_________________
<Patrician|Away> what does your robot do, sam
<bovril> it collects data about the surrounding environment, then discards it and drives into walls

Post Tue Nov 30, 2010 1:58 pm 
marto
Experienced Roboteer



I think we will stick with a box for now. I think I am a long way off trusting such a system with anything above beetles.

So you don't have to deal with all the points, just the ones which don't fit the shape of the arena. In theory the background subtraction should remove all extraneous points from the image, but then you have to assume nothing else in the image is moving other than the robots. So it's better to draw a box around what you think is the floor and ignore everything else.

We can also use different transforms and use multiple Kinects. (Might need an 8-core i7, but it could be done.)

IIRC the PCL library has the capability to match cups and boxes against 3D models of them, so this could be applied pretty well to the robots in the arena to give you a 3D estimate of their positions.

Steve
_________________
Steven Martin
Twisted Constructions
http://www.botbitz.com

Post Tue Nov 30, 2010 2:13 pm 
chris...




Draw a box around the floor? That simple... You'd be better off deleting the floor and everything outside the box, and you'll have two clusters of dots: robot 1 and robot 2.
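Something like this is all the clustering would take (a sketch using scipy's connected-component labelling; none of this is from any actual arena code):

code:
import numpy as np
from scipy import ndimage

def two_robots(mask):
    # mask: boolean image, True where background subtraction says "not arena"
    labels, n = ndimage.label(mask)                   # connected blobs
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1                 # two largest blobs
    return [np.argwhere(labels == k) for k in keep]   # pixel coords per robot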

How are the points oriented? Is the origin the camera?
Where do you plan to position the Kinect relative to the arena?

What happens with robots that have moving limbs? Won't a static model be incorrect once the robot changes shape or breaks?

How will you distinguish between two similar robots?

Post Tue Nov 30, 2010 2:51 pm 
marto
Experienced Roboteer



So basically the arena is a cube which has clear walls; the Kinect/camera would be mounted directly above it.

Because the walls are clear, you will see any movement which happens outside them, so you draw a box around where you see the floor to exclude everything outside it. Then do a background subtraction: look at the arena without robots, then exclude everything except what changes (so you are just left with the robots, like you said).
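The "arena without robots" reference could be built up front like this (a sketch; frames is assumed to be a stack of empty-arena depth images, with a median to ride out sensor noise):

code:
import numpy as np

def build_background(frames):
    # frames: (N, 480, 640) depth frames of the empty arena
    frames = np.where(frames > 0, frames, np.nan)  # mask unknown-depth pixels
    return np.nanmedian(frames, axis=0)            # robust per-pixel reference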

As for robots with moving limbs: you really just need to tell which robot is the best match to each model. If it changes a bit, that's fine; if it changes to look exactly like the other robot, then you may have a problem.

As for differentiating two robots which look the same, you can do some smart stuff. You can assume a robot can't teleport across the arena, so robot 1 is still close to its previous pose estimate. You can also look at the velocity of each robot and compare it to the commanded velocity: if the robot you think is robot 1 is doing what robot 2 was told to do, then you have the estimates the wrong way around. You could also look at colour, or use markers as a backup.
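The "can't teleport" check is just a couple of lines; a sketch with illustrative names, using 2D centroids only (the commanded-velocity comparison would be a second tie-breaker on top of this):

code:
import numpy as np

def assign_identities(prev, now):
    # prev, now: arrays of shape (2, 2) holding each robot's (x, y) centroid
    prev, now = np.asarray(prev), np.asarray(now)
    straight = np.linalg.norm(now - prev, axis=1).sum()
    swapped = np.linalg.norm(now[::-1] - prev, axis=1).sum()
    # keep the pairing that moves both robots the least overall
    return now if straight <= swapped else now[::-1]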

I am not saying it is failure-proof, but it's a much easier problem than having to identify an arbitrary object.
_________________
Steven Martin
Twisted Constructions
http://www.botbitz.com

Post Tue Nov 30, 2010 4:16 pm 
Valen
Experienced Roboteer



Processing 8 MB/s is nothing; you might even be able to push that in Python.
If you throw some numpy at it to handle the array maths, you could easily do 30 FPS of depth processing.

A simple bounding-box fit would be sufficient, I'd imagine. Assume that the bots are generally going to sit "flat" to the floor, give one of the game engines the point cloud for the blob, then do point-inside-box tests at a number of rotations; the orientation with the most points inside the bounding box wins.
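In numpy that fit could look something like this (a sketch; the footprint dimensions are placeholders, and the heading comes back ambiguous by 180 degrees since a box looks the same either way round):

code:
import numpy as np

def best_heading(points_xy, length=0.30, width=0.25, steps=180):
    # points_xy: (N, 2) floor-plane points for one robot blob
    pts = points_xy - points_xy.mean(axis=0)          # centre on the centroid
    best_theta, best_count = 0.0, -1
    for theta in np.linspace(0.0, np.pi, steps, endpoint=False):
        c, s = np.cos(theta), np.sin(theta)
        u = pts @ np.array([[c, -s], [s, c]])         # rotate into box frame
        inside = (np.abs(u[:, 0]) < length / 2) & (np.abs(u[:, 1]) < width / 2)
        if inside.sum() > best_count:
            best_theta, best_count = theta, inside.sum()
    return best_theta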

If you really want to push the image processing, use CUDA; there are Python bindings for it as well, such as PyCUDA (though that seems to have died).

I'm unsure if multiple Kinects will play nice when they can see each other's IR spots.

chris... You might want to attend an NSW event before you go too far with putting computers and fancy sensors in your bot. The last people who tried that chickened out when they saw Plan-B, and it's a wuss by today's standards.
Having a sensor overview of the arena and decent computer power giving commands to the bot is an eminently sensible approach. The only addition I'd think about making would be streaming high-rate IMU data back to the controller; alternately, keep that inside the bot and add some smarts to it, which should make that whole "drive in a straight line" thing easier.
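By "smarts" I mean something as dumb as a P controller on integrated gyro yaw; a sketch where read_gyro_z and set_motors are stand-ins for whatever the bot's firmware actually exposes:

code:
import time

KP = 1.5                      # illustrative gain

def read_gyro_z():            # stand-in: yaw rate in rad/s from the IMU
    return 0.0

def set_motors(left, right):  # stand-in: signed throttle per side
    pass

def drive_straight(throttle, dt=0.01):
    heading = 0.0             # integrated gyro heading; drifts, but fine short-term
    while True:
        heading += read_gyro_z() * dt
        correction = -KP * heading          # steer back toward the initial heading
        set_motors(throttle + correction, throttle - correction)
        time.sleep(dt)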

Talking with a bunch of American rocket people the other day (it being Thanksgiving), the topic of flying turkeys came up. It was a short step from there to "assuming a spherical turkey of uniform density".
_________________
Mechanical engineers build weapons, civil engineers build targets

Post Tue Nov 30, 2010 6:17 pm 
Jaemus
Experienced Roboteer



'ow could a five ounce swallow carry a 12 pound coconut?
_________________
<Patrician|Away> what does your robot do, sam
<bovril> it collects data about the surrounding environment, then discards it and drives into walls

Post Tue Nov 30, 2010 6:29 pm 
Bort
Experienced Roboteer



It could grip it by the husk
_________________
Farnsworth - "I hate these nerds. Just because I'm stupider than them they think they're smarter than me."

Post Tue Nov 30, 2010 7:36 pm 
marto
Experienced Roboteer



Ah, Python. You just sort of write code and the shit magically does itself.
Then if it's not fast enough, rewrite it in C.
_________________
Steven Martin
Twisted Constructions
http://www.botbitz.com

Post Tue Nov 30, 2010 10:59 pm 
chris...




That's a very convoluted way of overcomplicating a seemingly simple problem.

Let's say you do actually get this working. Then what?

How does out-of-state robot team 1 go about getting their machine automated and tested to use this process?
They can't just use the provided software, or everybody will be running the same pre-made AI (or a choice of pre-made AIs).
I imagine they will have to create their own program, but from what data? How does a home builder test software that will only work in that setup?

It's not as easy to test as just operating the robot yourself in the garage; the human operator provides the feedback loop and is exceptionally adaptive to changes in the environment. The only reliable way to test a Kinect-looped arena is with the same setup, software and hardware, since that's where the robots will be getting their location information from. Otherwise teams will have to re-create a different way to get location data and make it compatible with the Kinect version. Which leads back to: everyone has to work out the position of their own robot anyway to test their programs.

How are sensors and a computer a bad idea, while IMU data and smarts are a good idea? What are the sensitivity and range of the IMU you plan to use? I hope you plan to use it only for going in a straight line, not for determining angular position: if the robot is bashed with too much force it may be pushed past the sensor's limits, and once that happens its angular position may no longer be correct. Remember that IMUs accumulate errors.

How do you plan to use the IMU data if you have separate loops? Adjust the received control data for the intended direction, modified by the actual direction?
Won't you run into a loop echo, as you're getting correction information from both the observed and the measured data?
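(For reference, the textbook way to blend the two without such an echo is a complementary filter: the gyro supplies smooth short-term heading and the camera pulls long-term drift back, and neither feeds the other's input. A sketch, with an illustrative ALPHA:)

code:
ALPHA = 0.98   # illustrative: trust the gyro short-term, the camera long-term

def fuse_heading(heading, gyro_rate, camera_heading, dt):
    predicted = heading + gyro_rate * dt          # fast but drifting estimate
    return ALPHA * predicted + (1 - ALPHA) * camera_heading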

Post Tue Nov 30, 2010 11:09 pm 