ai_detection¶
This node performs object detection and hosts a Flask server from which the VR headset fetches images. It has access to two cameras (drive and arm) and switches between them depending on the current mode.
Subscribed topics
switch_camera (SwitchCamera): Camera to switch to, including information about what to show.
Published topics
message (Message): Log information.
detection¶
- class robot.src.ai_detection.ai_detection.detection.ObjectDetection(*args: Any, **kwargs: Any)¶
ObjectDetection class for the object detection application
- _switch_camera(camera: Camera) None ¶
Switches the camera to the specified camera.
- Parameters:
camera – Camera to switch to
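The switch must not race the loop that is reading frames from the current capture. A minimal sketch of that idea, using a hypothetical `CameraSwitcher` helper (not the actual `ObjectDetection` class) that guards the active device behind a lock:

```python
from threading import Lock


class CameraSwitcher:
    """Hypothetical helper sketching thread-safe camera switching.

    In the real node the device would back a cv2.VideoCapture; here only
    the device path is tracked so the locking pattern stays visible.
    """

    def __init__(self, initial_device: str):
        self._lock = Lock()
        self._device = initial_device
        self._capture = None  # would hold a cv2.VideoCapture in the real node

    def switch(self, device: str) -> None:
        # Reopen the capture under the lock so the frame-reading loop
        # never sees a half-released device.
        with self._lock:
            if device == self._device:
                return
            if self._capture is not None:
                self._capture.release()
            self._device = device
            # self._capture = cv2.VideoCapture(device)  # real node reopens here

    @property
    def device(self) -> str:
        with self._lock:
            return self._device
```

Switching to the device that is already active is a no-op, so repeated SwitchCamera messages for the same camera cost nothing.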
- handle_switch_camera(msg) None ¶
Handles an incoming SwitchCamera message.
- Parameters:
msg – SwitchCamera message
- save_result(result: mediapipe.tasks.python.vision.ObjectDetectorResult, unused_output_image: mediapipe.Image, timestamp_ms: int) None ¶
Save the detection result to the global variable
- Parameters:
result (vision.ObjectDetectorResult) – The detection result
unused_output_image (mp.Image) – The output image
timestamp_ms (int) – The timestamp in milliseconds
- inference(class_name: str, max_results=5, score_threshold=0.8) None ¶
Perform the object detection inference
- Parameters:
class_name – The class name
max_results – The maximum number of results
score_threshold – The score threshold
- _detect_objects(frame)¶
Detect objects in the frame
- Parameters:
frame – The input frame
- Returns:
The frame with the detected objects
- __retreive_info(object_name: str) dict ¶
Retrieve all information for the object from the database
- Parameters:
object_name – Name of the object
- Returns:
Dictionary containing all information for the object
- Return type:
dict
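The database schema is not documented here, so the lookup can only be sketched under assumptions. The version below assumes a single `objects` table keyed by name and uses the stdlib `sqlite3` module; the actual backend, table layout, and column names may differ:

```python
import sqlite3


def retrieve_info(conn: sqlite3.Connection, object_name: str) -> dict:
    """Hypothetical lookup: fetch one row for the object and return it
    as a plain dict, or an empty dict when the object is unknown."""
    conn.row_factory = sqlite3.Row  # rows become name-addressable
    row = conn.execute(
        "SELECT * FROM objects WHERE name = ?", (object_name,)
    ).fetchone()
    return dict(row) if row is not None else {}
```

Returning an empty dict for unknown objects keeps the caller's code branch-free when it merely displays whatever fields exist.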
- websocket_handler(ws) None ¶
- generate_frame() str ¶
Generate the frames for the object detection snapshot.
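Serving camera frames over HTTP is commonly done with a generator that yields multipart chunks, which Flask streams with the mimetype `multipart/x-mixed-replace; boundary=frame`. A sketch of that pattern (the function name and boundary are illustrative; the node's actual generator may differ):

```python
def generate_frames(jpeg_frames):
    """Yield MJPEG multipart chunks from an iterable of JPEG-encoded bytes.

    In the real node the frames would come from the active camera after
    detection boxes are drawn; here any iterable of bytes stands in.
    """
    for frame in jpeg_frames:
        # Each chunk replaces the previous image in the client's view.
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + frame + b"\r\n")
```

A Flask route would typically wrap this as `Response(generate_frames(...), mimetype="multipart/x-mixed-replace; boundary=frame")`.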
- robot.src.ai_detection.ai_detection.detection.main(args=None)¶
database¶
utils¶
- class robot.src.ai_detection.ai_detection.utils.Camera(value, names=<not given>, *values, module=None, qualname=None, type=None, start=1, boundary=None)¶
- DRIVE = '/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._Astra_Pro_HD_Camera-video-index0'¶
- ARM = '/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._USB_2.0_Camera-video-index0'¶
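The enum values are persistent `/dev/v4l/by-id/` paths, which stay stable across reboots and re-enumeration, unlike `/dev/video0`-style indices. A self-contained mirror of the enum showing how a camera is looked up by name (e.g. from a SwitchCamera message):

```python
from enum import Enum


class Camera(Enum):
    """Mirror of ai_detection.utils.Camera with its by-id device paths."""
    DRIVE = "/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._Astra_Pro_HD_Camera-video-index0"
    ARM = "/dev/v4l/by-id/usb-Sonix_Technology_Co.__Ltd._USB_2.0_Camera-video-index0"


# Resolve a camera from a message field carrying the name as a string:
requested = Camera["ARM"]
```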
- robot.src.ai_detection.ai_detection.utils.get_path(subpath: str) str ¶
Get the path of the file.
- Parameters:
subpath – Subpath of the file
- Returns:
Path of the file
- robot.src.ai_detection.ai_detection.utils.get_config() dict ¶
Get the configuration.
- Returns:
config dict
- robot.src.ai_detection.ai_detection.utils.visualize(image, detection_result) numpy.ndarray ¶
Draws bounding boxes on the input image and returns it.
- Parameters:
image – The input RGB image
detection_result – The list of all “Detection” entities to be visualized
- Returns:
Image with bounding boxes drawn on it.
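The pixel-level idea behind the visualization can be shown without MediaPipe or OpenCV: painting a one-pixel rectangle outline directly into a NumPy image. This is a stand-in only; the real helper reads the bounding box and category from each MediaPipe `Detection` and uses cv2 drawing calls:

```python
import numpy as np


def draw_box(image: np.ndarray, x0: int, y0: int, x1: int, y1: int,
             color=(255, 0, 0)) -> np.ndarray:
    """Paint a 1-px rectangle outline onto an H x W x 3 image in place.

    (x0, y0) is the top-left corner, (x1, y1) the bottom-right, both
    inclusive; a sketch of what visualize() does per detection.
    """
    image[y0, x0:x1 + 1] = color  # top edge
    image[y1, x0:x1 + 1] = color  # bottom edge
    image[y0:y1 + 1, x0] = color  # left edge
    image[y0:y1 + 1, x1] = color  # right edge
    return image
```

The real helper additionally writes the detected category name and score next to each box.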