people_recognition_3d.people_recognizer_3d¶
Attributes¶
- Joint

Classes¶
- Skeleton: Dictionary of all joints; the following joints could be available.
- PeopleRecognizer3D

Functions¶
- _get_and_wait_for_service: Function to start and wait for dependent service
- _get_service_response: Method to get service response with checks
- get_frame_from_vector: Function to generate an affine transformation frame given the x_vector, z_direction and translation
Module Contents¶
- people_recognition_3d.people_recognizer_3d._get_and_wait_for_service(srv_name, srv_class)[source]¶
Function to start and wait for dependent service
- Param:
srv_name: Service name
- Param:
srv_class: Service class
- Returns:
started ServiceProxy object
- people_recognition_3d.people_recognizer_3d._get_service_response(srv, args)[source]¶
Method to get service response with checks
- Param:
srv: service
- Param:
args: Input arguments of the service request
- Returns:
response
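In a ROS node these two helpers would typically wrap `rospy.wait_for_service` and `rospy.ServiceProxy`. The sketch below is a pure-Python illustration of the same wait-then-call-with-checks pattern; the registry, timeout values, and service name are hypothetical, not part of the module.

```python
import time

# Hypothetical stand-in for the ROS master's service registry.
_REGISTRY = {}

def _get_and_wait_for_service(srv_name, registry=_REGISTRY, timeout=1.0, poll=0.01):
    """Block until srv_name is available, then return a callable proxy."""
    deadline = time.monotonic() + timeout
    while srv_name not in registry:
        if time.monotonic() >= deadline:
            raise TimeoutError("service %r not available" % srv_name)
        time.sleep(poll)
    return registry[srv_name]

def _get_service_response(srv, args):
    """Call the service and sanity-check the response before returning it."""
    response = srv(**args)
    if response is None:
        raise RuntimeError("service returned no response")
    return response

# Usage: register a fake service, wait for it, then call it with checks.
_REGISTRY["recognize_people"] = lambda image: {"people": [], "image": image}
proxy = _get_and_wait_for_service("recognize_people")
result = _get_service_response(proxy, {"image": "rgb_frame"})
```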
- people_recognition_3d.people_recognizer_3d.Joint¶
- people_recognition_3d.people_recognizer_3d.get_frame_from_vector(x_vector, translation, z_direction=kdl.Vector(0, 0, 1))[source]¶
Function to generate an affine transformation frame given the x_vector, z_direction and translation of the frame.
- How this works:
Any two vectors define a plane, so x_vector and z_direction can be treated as such a pair; the cross product of two vectors yields a vector perpendicular to the plane they span.
First, normalize x_vector to get the unit_x vector.
- Then take the cross product of z_direction and unit_x; this gives
the y_direction. Normalize y_direction to get the unit_y vector.
Finally, take the cross product of unit_x and unit_y to get unit_z.
- Param:
x_vector: The x_vector in some coordinate frame.
- Param:
translation: The origin (translation) of the frame to be created
- Param:
z_direction (default kdl.Vector(0, 0, 1)): The desired direction of the frame's z-axis
- Returns:
frame: KDL frame
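The construction described above can be sketched with plain NumPy (substituting a 4x4 matrix for the PyKDL frame; the function name here is illustrative, not the module's API):

```python
import numpy as np

def frame_from_vector(x_vector, translation, z_direction=(0.0, 0.0, 1.0)):
    """Build a 4x4 affine frame whose x-axis follows x_vector.

    Mirrors the docstring's steps: normalize x_vector, cross z_direction
    with unit_x to get y_direction, normalize it, then cross unit_x with
    unit_y to get unit_z.
    """
    unit_x = np.asarray(x_vector, dtype=float)
    unit_x /= np.linalg.norm(unit_x)
    y_direction = np.cross(z_direction, unit_x)   # perpendicular to both
    unit_y = y_direction / np.linalg.norm(y_direction)
    unit_z = np.cross(unit_x, unit_y)             # completes right-handed basis
    frame = np.eye(4)
    frame[:3, 0] = unit_x
    frame[:3, 1] = unit_y
    frame[:3, 2] = unit_z
    frame[:3, 3] = translation
    return frame

# x_vector along +x with the default z_direction gives an identity rotation.
F = frame_from_vector([2.0, 0.0, 0.0], [1.0, 2.0, 3.0])
```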
- class people_recognition_3d.people_recognizer_3d.Skeleton(body_parts)[source]¶
Bases:
object
Dictionary of all joints; the following joints could be available:
Nose Neck {L, R}{Shoulder, Elbow, Wrist, Hip, Knee, Ankle, Eye, Ear}
Constructor
- Parameters:
body_parts (Mapping[str, Joint]) – {name: Joint}
- body_parts¶
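As an illustration of the mapping the constructor expects (the Joint fields and coordinates below are hypothetical, not the module's actual Joint definition), a minimal Skeleton-style wrapper could look like:

```python
from collections import namedtuple

# Hypothetical Joint fields; the real module defines its own Joint tuple.
Joint = namedtuple("Joint", ["group_id", "point"])

body_parts = {
    "Nose": Joint(group_id=0, point=(0.1, 0.0, 1.5)),
    "Neck": Joint(group_id=0, point=(0.1, 0.0, 1.3)),
    "LShoulder": Joint(group_id=0, point=(-0.1, 0.1, 1.3)),
}

class Skeleton:
    """Minimal sketch: a thin dict wrapper keyed by joint name."""

    def __init__(self, body_parts):
        self.body_parts = dict(body_parts)

    def __contains__(self, name):
        return name in self.body_parts

    def __getitem__(self, name):
        return self.body_parts[name]

skeleton = Skeleton(body_parts)
```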
- class people_recognition_3d.people_recognizer_3d.PeopleRecognizer3D(recognize_people_srv_name, probability_threshold, link_threshold, heuristic, arm_norm_threshold, neck_norm_threshold, waving_threshold, vert_threshold, hor_threshold, padding)[source]¶
Bases:
object
- _recognize_people_srv¶
- _bridge¶
- _threshold¶
- _link_threshold¶
- _heuristic¶
- _arm_norm_threshold¶
- _neck_norm_threshold¶
- _waving_threshold¶
- _vert_threshold¶
- _hor_threshold¶
- _padding¶
- recognize(rgb, depth, camera_info)[source]¶
Service call function
- Param:
rgb: RGB Image msg
- Param:
depth: Depth Image msg
- Param:
camera_info: Depth CameraInfo msg
- Parameters:
rgb (sensor_msgs.msg.Image) –
depth (sensor_msgs.msg.Image) –
camera_info (sensor_msgs.msg.CameraInfo) –
- recognitions_to_joints(recognitions, cv_depth, cam_model, regions_viz, scale)[source]¶
Method to convert 2D recognitions of body parts to Joint named tuple
- Param:
recognitions: List of body part recognitions
- Param:
cv_depth: cv2 Depth image
- Param:
cam_model: Depth camera model
- Param:
regions_viz: numpy array the size of cv_depth to store depth values of the ROIs
- Param:
scale: Scaling factor of ROIs based on difference in size of RGB and D images
- Returns:
joints: List of joints of type Joint
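The scale parameter compensates for the resolution difference between the RGB image (where the recognitions live) and the depth image (where the ROI depth values are read). A hedged sketch of that coordinate mapping, assuming scale = depth_width / rgb_width and a hypothetical helper name:

```python
def scale_roi(x, y, w, h, scale):
    """Map an ROI given in RGB pixel coordinates into depth-image
    coordinates, assuming scale = depth_width / rgb_width."""
    return (int(round(x * scale)), int(round(y * scale)),
            int(round(w * scale)), int(round(h * scale)))

# A 640x480 RGB image paired with a 320x240 depth image -> scale 0.5.
roi = scale_roi(100, 60, 40, 80, 0.5)
```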