🍒YOLO_Multi_Crop✀多人物裁切:
The YOLO_Multi_Crop node enhances image processing by leveraging the YOLO (You Only Look Once) model for object detection and cropping. It is particularly useful for AI artists who want to focus on specific objects within an image, such as people or other predefined categories, by automatically detecting and cropping those objects from the input image. The node's primary goal is to simplify isolating objects of interest, letting you work with the cropped images directly. Because detection is handled by a YOLO model, it is fast enough for batch use while remaining accurate, which is especially helpful when you need to process many images or require precise crops for further artistic manipulation or analysis.
🍒YOLO_Multi_Crop✀多人物裁切 Input Parameters:
image
The image parameter is the input image that you want to process. It must be provided as a tensor, and the node will handle the conversion to the appropriate format for YOLO processing. This parameter is crucial as it serves as the source from which objects will be detected and cropped.
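If your image starts out as a NumPy array (for example, loaded via PIL or OpenCV), preparing it for a ComfyUI-style node usually means normalizing to float32 in the 0-1 range and adding a batch dimension; wrapping the result with `torch.from_numpy` then yields the tensor the node expects. A minimal sketch of that layout (the helper name is an assumption, not part of the node):

```python
import numpy as np

def image_to_comfy_array(arr_uint8):
    """Convert an HxWx3 uint8 image array to the float32 [0, 1],
    batch-first (1, H, W, C) layout commonly used by ComfyUI nodes.
    In a ComfyUI environment you would then call:
        tensor = torch.from_numpy(image_to_comfy_array(arr))
    """
    arr = arr_uint8.astype(np.float32) / 255.0  # scale 0-255 -> 0.0-1.0
    return arr[None, ...]                       # add batch dim -> (1, H, W, C)
```

The exact channel order and batch convention can vary between nodes, so check the node's error messages if the shape is rejected.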
yolo_model
The yolo_model parameter specifies the YOLO model file to be used for object detection. You can choose from a list of available model files, which are preloaded or cached for efficiency. This parameter determines the accuracy and type of objects that can be detected, as different models may be trained on different datasets.
confidence
The confidence parameter sets the threshold for object detection confidence. It is a float value ranging from 0.1 to 1.0, with a default of 0.5. This parameter impacts the sensitivity of the detection process; a higher value means only objects with higher detection confidence will be considered, reducing false positives.
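Conceptually, confidence thresholding just discards detections whose score falls below the cutoff. A minimal sketch of the idea, using a hypothetical (box, score) tuple format rather than the node's actual internals:

```python
def filter_by_confidence(detections, confidence=0.5):
    """Keep only detections whose score meets the threshold.
    `detections` is a list of (box, score) pairs -- an assumed
    format for illustration, not the node's real data structure."""
    return [d for d in detections if d[1] >= confidence]
```

Raising the threshold from 0.5 toward 1.0 shrinks this list, trading recall for precision.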
square_size
The square_size parameter defines the size of the cropped area as a percentage of the image dimensions. It is a float value ranging from 10.0 to 200.0, with a default of 100.0. This parameter allows you to control the size of the cropped images, which can be useful for ensuring consistency in the output size or focusing on specific details.
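As an illustration of percentage-based sizing, the sketch below computes a square crop centered on a detection whose edge is square_size percent of the shorter image dimension. The centering and edge-clamping details are assumptions for the example; the node's exact geometry may differ:

```python
def square_crop_box(cx, cy, img_w, img_h, square_size=100.0):
    """Return (x1, y1, x2, y2) for a square crop centered on (cx, cy),
    with edge length = square_size percent of the shorter image side,
    clamped to the image bounds."""
    edge = min(img_w, img_h) * square_size / 100.0
    half = edge / 2.0
    x1 = max(0, int(cx - half))
    y1 = max(0, int(cy - half))
    x2 = min(img_w, int(cx + half))
    y2 = min(img_h, int(cy + half))
    return x1, y1, x2, y2
```

At 100.0 the crop spans the full shorter side; values above 100.0 would simply clamp to the image edges under this scheme.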
max_detections
The max_detections parameter limits the number of objects that can be detected and cropped from the image. It is an integer value ranging from 1 to 20, with a default of 5. This parameter helps manage the output by preventing an overwhelming number of crops, which can be useful when processing images with many detectable objects.
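Capping the output at max_detections is typically done by keeping the highest-confidence detections first. A hypothetical sketch, again using (box, score) pairs as an assumed format:

```python
def top_detections(detections, max_detections=5):
    """Sort detections by score (highest first) and keep at most
    max_detections of them."""
    return sorted(detections, key=lambda d: d[1], reverse=True)[:max_detections]
```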
🍒YOLO_Multi_Crop✀多人物裁切 Output Parameters:
IMAGE
The IMAGE output parameter provides a list of cropped images, each corresponding to a detected object in the input image. These cropped images are ready for further artistic manipulation or analysis, allowing you to focus on specific elements of interest within the original image.
DATA
The DATA output parameter contains metadata about the detected objects, such as their bounding box coordinates and confidence scores. This information is valuable for understanding the context of the detected objects and can be used for further processing or decision-making in your workflow.
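A downstream script might consume DATA along these lines. The dict keys shown here are assumptions for illustration; inspect the node's actual output to confirm the field names:

```python
# Hypothetical DATA payload: one entry per detected object, with
# a pixel-space bounding box and a detection confidence score.
data = [
    {"bbox": [12, 30, 96, 210], "confidence": 0.87},
    {"bbox": [140, 25, 220, 200], "confidence": 0.63},
]

for i, det in enumerate(data):
    x1, y1, x2, y2 = det["bbox"]
    print(f"crop {i}: {x2 - x1}x{y2 - y1} px, score {det['confidence']:.2f}")
```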
🍒YOLO_Multi_Crop✀多人物裁切 Usage Tips:
- Adjust the confidence parameter to balance detection accuracy against the number of objects detected. A higher confidence threshold reduces false positives but may miss less prominent objects.
- Use the square_size parameter to ensure that the cropped images meet your specific size requirements, especially if you need uniformity across multiple images.
🍒YOLO_Multi_Crop✀多人物裁切 Common Errors and Solutions:
输入的 image 不是 torch.Tensor 类型 (the input image is not of type torch.Tensor)
- Explanation: This error occurs when the input image is not provided as a tensor, which is the expected format for processing.
- Solution: Ensure that the input image is converted to a tensor format before passing it to the node.
Failed to load YOLO model: <error_message>
- Explanation: This error indicates that the specified YOLO model could not be loaded, possibly due to an incorrect file path or a corrupted model file.
- Solution: Verify that the model file path is correct and that the file is not corrupted. Ensure that the model file is compatible with the node's requirements.
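Before loading, you can verify the model path yourself. The helper below is a hypothetical pre-check sketch, not part of the node, assuming YOLO weights shipped as .pt or .onnx files:

```python
import os

def check_yolo_model_path(path):
    """Basic sanity checks on a YOLO model file before attempting
    to load it. Returns "ok" or an error string mirroring the
    node's "Failed to load YOLO model: ..." message format."""
    if not os.path.isfile(path):
        return f"Failed to load YOLO model: file not found at {path}"
    if not path.endswith((".pt", ".onnx")):
        return f"Failed to load YOLO model: unexpected extension for {path}"
    return "ok"
```

If the path checks out but loading still fails, the file is likely corrupted or saved in a format the node does not support; re-download the weights and try again.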
