people_recognition_3d.people_recognizer_3d

Attributes

Joint

Classes

Skeleton

Dictionary of all joints; the following joints could be available:

PeopleRecognizer3D

Functions

_get_and_wait_for_service(srv_name, srv_class)

Function to start and wait for a dependent service

_get_service_response(srv, args)

Method to get a service response with checks

geometry_msg_point_to_kdl_vector(msg)

get_frame_from_vector(x_vector, translation[, z_direction])

Function to generate an affine transformation frame given the x_vector, z_direction and translation of the frame

Module Contents

people_recognition_3d.people_recognizer_3d._get_and_wait_for_service(srv_name, srv_class)[source]

Function to start and wait for a dependent service

Parameters:
  • srv_name – Service name

  • srv_class – Service class

Returns:

started ServiceProxy object
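The start-and-wait pattern can be sketched generically. In the real node this wraps rospy.wait_for_service followed by rospy.ServiceProxy; the polling loop and the `is_advertised`/`make_proxy` callables below are illustrative stand-ins so the sketch runs without ROS:

```python
import time

def get_and_wait_for_service(srv_name, is_advertised, make_proxy,
                             timeout=10.0, poll=0.1):
    """Poll until `srv_name` is advertised, then return a proxy for it.

    Generic sketch only: `is_advertised` and `make_proxy` stand in for
    rospy.wait_for_service / rospy.ServiceProxy in the real implementation.
    """
    deadline = time.monotonic() + timeout
    while not is_advertised(srv_name):
        if time.monotonic() > deadline:
            raise TimeoutError("service %r not available" % srv_name)
        time.sleep(poll)
    return make_proxy(srv_name)
```

Blocking at startup like this makes a missing dependency fail loudly once, rather than on every later service call.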

people_recognition_3d.people_recognizer_3d._get_service_response(srv, args)[source]

Method to get a service response with checks

Parameters:
  • srv – Service proxy to call

  • args – Input arguments of the service request

Returns:

response
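The call-with-checks pattern can be sketched as follows. This is a minimal illustration: the real code calls a rospy.ServiceProxy and would catch rospy.ServiceException specifically, which is not assumed installed here:

```python
def get_service_response(srv, args):
    """Call a service and surface any failure as a single clear error.

    Sketch of the pattern only; in the real node `srv` is a
    rospy.ServiceProxy and the caught exception is rospy.ServiceException.
    """
    try:
        return srv(args)
    except Exception as exc:
        raise RuntimeError("Service call failed: %s" % exc)
```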

people_recognition_3d.people_recognizer_3d.Joint
people_recognition_3d.people_recognizer_3d.geometry_msg_point_to_kdl_vector(msg)[source]
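The conversion is a field-by-field copy of a geometry_msgs/Point into a 3-vector. Sketched here with a namedtuple standing in for the message and a plain tuple standing in for kdl.Vector, so it runs without ROS or PyKDL:

```python
from collections import namedtuple

Point = namedtuple("Point", "x y z")  # stand-in for geometry_msgs/Point

def point_to_vector(msg):
    # The real function returns kdl.Vector(msg.x, msg.y, msg.z); a plain
    # tuple is used in this sketch so it runs without PyKDL installed.
    return (msg.x, msg.y, msg.z)
```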
people_recognition_3d.people_recognizer_3d.get_frame_from_vector(x_vector, translation, z_direction=kdl.Vector(0, 0, 1))[source]

Function to generate an affine transformation frame given the x_vector, z_direction and translation of the frame.

How this works:

Any two non-parallel vectors define a plane, so x_vector and z_direction can be considered as such vectors. Taking the vector cross product of these two vectors gives a vector perpendicular to that plane.

  1. First normalize the x_vector to get a unit_x vector.

  2. Take the cross product of z_direction and unit_x; this will give the y_direction. Normalize y_direction to get the unit_y vector.

  3. Take the cross product of unit_x and unit_y to get unit_z.

Parameters:
  • x_vector – The x_vector in some coordinate frame.

  • translation – The origin of the frame to be created.

  • z_direction (default kdl.Vector(0, 0, 1)) – The direction of the z-axis.

Returns:

frame: KDL frame
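The three steps above can be sketched with numpy; the actual implementation builds a PyKDL kdl.Frame, whereas this sketch returns the rotation matrix (whose columns are the computed unit axes) and the origin:

```python
import numpy as np

def frame_from_vector(x_vector, translation, z_direction=(0.0, 0.0, 1.0)):
    """Numpy sketch of the frame construction described above; the real
    function returns a kdl.Frame rather than a (rotation, origin) pair."""
    unit_x = np.asarray(x_vector, dtype=float)
    unit_x = unit_x / np.linalg.norm(unit_x)          # step 1: normalize x_vector
    y = np.cross(np.asarray(z_direction, dtype=float), unit_x)
    unit_y = y / np.linalg.norm(y)                    # step 2: y = z_direction x unit_x
    unit_z = np.cross(unit_x, unit_y)                 # step 3: completes the basis
    rotation = np.column_stack([unit_x, unit_y, unit_z])
    return rotation, np.asarray(translation, dtype=float)
```

With x_vector along +x and the default z_direction, the rotation is the identity; for any non-degenerate input the columns form a right-handed orthonormal basis.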

class people_recognition_3d.people_recognizer_3d.Skeleton(body_parts)[source]

Bases: object

Dictionary of all joints; the following joints could be available:

Nose, Neck, {L, R}{Shoulder, Elbow, Wrist, Hip, Knee, Ankle, Eye, Ear}

Constructor

Parameters:

body_parts (Mapping[str, Joint]) – {name: Joint}

body_parts
filter_body_parts(threshold)[source]

Method to remove body parts from a Skeleton object based on the maximum length of a link

Parameters:
  • threshold (float) – Maximum length of a link

Returns:

Skeleton object containing body parts within the threshold
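A minimal sketch of link-length filtering. The link pairs and the choice of dropping the child joint of an over-long link are assumptions for illustration; the real method operates on Joint values and returns a new Skeleton:

```python
import math

def filter_by_link_length(parts, links, threshold):
    """Drop joints connected by a link longer than `threshold`.

    Sketch only: `parts` maps joint name -> (x, y, z); `links` lists
    (parent, child) name pairs; dropping the child joint is an assumption.
    """
    kept = dict(parts)
    for parent, child in links:
        if parent in kept and child in kept:
            if math.dist(kept[parent], kept[child]) > threshold:
                kept.pop(child)  # implausibly long link -> discard outlier joint
    return kept
```

Filtering like this removes joints whose back-projected depth was sampled from the background rather than the person, which shows up as anatomically impossible link lengths.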

__getitem__(key)[source]
__contains__(item)[source]

Returns:

[Point], with point pairs for all the links
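The dict-like access documented above (`__getitem__` / `__contains__`) can be mimicked with a tiny stand-in class; the real Skeleton stores Joint values rather than the plain coordinate tuples used here:

```python
class SkeletonSketch:
    """Stand-in mirroring Skeleton's documented mapping interface.

    Illustration only: values here are bare (x, y, z) tuples, whereas the
    real class maps joint names to Joint named tuples.
    """
    def __init__(self, body_parts):
        self._parts = dict(body_parts)

    def __getitem__(self, key):
        return self._parts[key]          # look up a joint by name

    def __contains__(self, item):
        return item in self._parts       # e.g. "Neck" in skeleton

    def __repr__(self):
        return "SkeletonSketch(%s)" % sorted(self._parts)
```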

__repr__()[source]
class people_recognition_3d.people_recognizer_3d.PeopleRecognizer3D(recognize_people_srv_name, probability_threshold, link_threshold, heuristic, arm_norm_threshold, neck_norm_threshold, waving_threshold, vert_threshold, hor_threshold, padding)[source]

Bases: object

Parameters:
  • recognize_people_srv_name (str) –

  • probability_threshold (float) –

  • link_threshold (float) –

  • heuristic (str) –

  • arm_norm_threshold (float) –

  • neck_norm_threshold (float) –

  • waving_threshold (float) –

  • vert_threshold (float) –

  • hor_threshold (float) –

  • padding (int) –

_recognize_people_srv
_bridge
_threshold
_heuristic
_arm_norm_threshold
_neck_norm_threshold
_waving_threshold
_vert_threshold
_hor_threshold
_padding
recognize(rgb, depth, camera_info)[source]

Service call function to recognize people in 3D from RGB-D input

Parameters:
  • rgb (sensor_msgs.msg.Image) – RGB image msg

  • depth (sensor_msgs.msg.Image) – Depth image msg

  • camera_info (sensor_msgs.msg.CameraInfo) – Depth CameraInfo msg

recognitions_to_joints(recognitions, cv_depth, cam_model, regions_viz, scale)[source]

Method to convert 2D recognitions of body parts to Joint named tuples

Parameters:
  • recognitions – List of body part recognitions

  • cv_depth – cv2 depth image

  • cam_model – Depth camera model

  • regions_viz – numpy array the size of cv_depth to store depth values of the ROIs

  • scale – Scaling factor of the ROIs, based on the difference in size of the RGB and depth images

Returns:

joints: List of joints of type Joint
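Lifting a 2D detection to a 3D joint amounts to pinhole back-projection with a depth value sampled from the ROI. This sketches the geometry only; the real method delegates the camera intrinsics to the cam_model argument:

```python
def pixel_to_3d(u, v, depth, fx, fy, cx, cy):
    """Back-project pixel (u, v) at range `depth` through a pinhole model
    with focal lengths (fx, fy) and principal point (cx, cy).

    Geometry sketch only; intrinsics normally come from the CameraInfo msg.
    """
    x = (u - cx) * depth / fx    # horizontal offset scaled by depth
    y = (v - cy) * depth / fy    # vertical offset scaled by depth
    return (x, y, depth)
```

A pixel at the principal point maps straight down the optical axis; pixels further from it map to proportionally larger lateral offsets at the same depth.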

get_person_tags(skeleton)[source]

Method to get tags for a skeleton. The possible elements of the tag list are:

  1. LWaving/LPointing | RWaving/RPointing

  2. LLaying/LSitting | RLaying/RSitting

  3. LHolding | RHolding

  4. LNotHolding | RNotHolding

Parameters:
  • skeleton (Skeleton) – The filtered skeleton of a person

Returns:

tags: List of tags for the person

Return type:

List[str]
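The shape of such a per-side decision can be illustrated schematically. The joints compared and the threshold semantics below are assumptions for illustration only; the actual heuristics use the configured arm/neck norm and waving thresholds in ways not spelled out here:

```python
def arm_tag(side, wrist_height, shoulder_height, waving_threshold):
    """Toy per-side tag decision: an arm raised above the shoulder by more
    than `waving_threshold` is tagged Waving, otherwise Pointing.

    Illustrative assumption only; the real heuristics combine several
    joint vectors and thresholds.
    """
    if wrist_height - shoulder_height > waving_threshold:
        return side + "Waving"
    return side + "Pointing"
```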

get_pointing_pose(skeleton)[source]
Parameters:

skeleton (Skeleton) –