Facilitates human pose detection and analysis using deep learning models, extracting body, hand, and face keypoints for interactive visual content creation.
The tri3d-dwpose node detects and analyzes human poses in images using advanced deep learning models. Its primary purpose is to identify and extract keypoints for the body, hands, and face, providing a comprehensive picture of human posture and movement. This node is particularly useful for AI artists and developers who want to incorporate pose estimation into their projects, enabling more interactive and dynamic visual content. By relying on pre-trained models, tri3d-dwpose offers a robust pose-detection solution with high accuracy and efficiency. It can also detect multiple poses within a single image, making it a versatile tool for applications ranging from animation to augmented reality.
This parameter determines whether the node should include hand keypoints in the pose detection process. When set to "enable," the node will analyze and return keypoints for both the left and right hands, providing detailed hand pose information. This can be particularly useful for applications requiring precise hand movements, such as sign language recognition or gesture-based controls. The default value is typically "disable," focusing on body and face keypoints unless specified otherwise.
This parameter controls the inclusion of body keypoints in the pose detection. By setting it to "enable," the node will detect and return keypoints for the entire body, capturing the overall posture and movement. This is essential for applications that require full-body analysis, such as fitness tracking or dance choreography. The default setting is "enable," as body keypoints are often the primary focus in pose estimation tasks.
This parameter specifies whether the node should detect and return facial keypoints. When enabled, the node will provide detailed facial pose information, which can be crucial for applications involving facial expression analysis or avatar creation. The default value is "disable," focusing on body and hand keypoints unless facial analysis is required.
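The three toggles above take "enable"/"disable" string values rather than booleans. As a minimal sketch of how such flags might be parsed inside a node, here is an illustrative helper; the function name and the returned dictionary keys are assumptions, not part of the node's actual API, but the string values and defaults mirror the descriptions above.

```python
# Hypothetical sketch: converting the node's "enable"/"disable" string
# toggles into booleans. Defaults follow the parameter descriptions:
# detect_body defaults to "enable", the other two to "disable".

def parse_detection_flags(detect_hand="disable",
                          detect_body="enable",
                          detect_face="disable"):
    """Normalize the node's string toggles into a dict of booleans."""
    def to_bool(value):
        if value not in ("enable", "disable"):
            raise ValueError(f"expected 'enable' or 'disable', got {value!r}")
        return value == "enable"

    return {
        "hands": to_bool(detect_hand),
        "body": to_bool(detect_body),
        "face": to_bool(detect_face),
    }

flags = parse_detection_flags(detect_hand="enable")
print(flags)  # {'hands': True, 'body': True, 'face': False}
```

Validating the strings up front, as sketched here, turns a typo like "enabled" into an immediate error instead of a silently skipped detection.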
The PoseResult output provides a structured representation of the detected poses, including keypoints for the body, hands, and face. Each PoseResult contains a BodyResult with decompressed body keypoints, as well as separate keypoint lists for the left hand, right hand, and face. This output is essential for interpreting the pose data, allowing users to visualize and utilize the detected keypoints in their applications. The output is typically a list of PoseResult objects, one for each person detected in the image.
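The structure described above can be sketched with plain dataclasses. The real PoseResult and BodyResult classes live in the node's own code, so the exact field names here are assumptions based on the description; the shape (one object per person, body keypoints plus optional hand and face keypoints) follows the text.

```python
# Illustrative sketch of the PoseResult output structure, using plain
# dataclasses. Field names are assumptions; hands/face are None when the
# corresponding detect_* toggle is disabled or nothing was found.
from dataclasses import dataclass
from typing import List, Optional, Tuple

Keypoint = Tuple[float, float]  # (x, y), often normalized to [0, 1]

@dataclass
class BodyResult:
    keypoints: List[Keypoint]

@dataclass
class PoseResult:
    body: BodyResult
    left_hand: Optional[List[Keypoint]]
    right_hand: Optional[List[Keypoint]]
    face: Optional[List[Keypoint]]

# One entry per detected person in the image:
poses = [PoseResult(BodyResult([(0.5, 0.2), (0.5, 0.4)]), None, None, None)]
for person in poses:
    print(len(person.body.keypoints))  # number of body keypoints found
```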
This output parameter indicates the height of the processed image. It is useful for scaling and aligning the detected keypoints with the original image dimensions, ensuring accurate representation and analysis.
Similar to the height parameter, the width output provides the width of the processed image. It aids in maintaining the correct aspect ratio and alignment of the keypoints with the image, facilitating precise pose visualization and manipulation.
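The height and width outputs are what make this scaling possible. Assuming the keypoints are normalized to the [0, 1] range (a common convention for pose detectors, though not confirmed by this document), mapping them back to pixel coordinates is a simple multiplication:

```python
# Sketch: mapping normalized keypoints back to pixel coordinates using
# the node's height and width outputs. Assumes (x, y) pairs in [0, 1].

def to_pixels(keypoints, width, height):
    """Scale normalized (x, y) keypoints to pixel coordinates."""
    return [(x * width, y * height) for (x, y) in keypoints]

pts = to_pixels([(0.5, 0.25)], width=640, height=480)
print(pts)  # [(320.0, 120.0)]
```

If the detector instead emits pixel coordinates directly, the same two values let you rescale keypoints to a resized copy of the image by dividing first.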
Enable the detect_hand parameter when your project requires detailed hand movements, such as in virtual reality applications or interactive installations.
Enable the detect_face parameter for projects focusing on facial expressions or avatar creation, ensuring you capture all necessary facial keypoints.
If the model specified by DWPOSE_MODEL_NAME is not available in the cache directory, ensure the model file has been downloaded into the annotator_ckpts_path directory. Verify the model name and path for any discrepancies.