Here you can find information on how best to use our Generate Anything tool to get the best possible results from your model generation, whether you choose image or text generation.
Images with busy backgrounds may cause issues with generation, including but not limited to unwanted textures and mesh distortion. To avoid this, it is best to remove the background entirely and keep the image focused on the subject of your model.
Additionally, try to avoid having more than one subject in the image; otherwise the tool may not be able to determine which subject to use as the source for the model.
Try to ensure the subject in your image has all limbs visibly separated in some way (e.g. gaps between the legs and arms) to ensure a clean, accurate generation.
Higher resolution images may take a little longer to generate but will likely provide higher texture quality on the finished model.
If you are unable to separate your model source from the background, ensure there is strong contrast between the background and the subject to help the tool identify the model source. This can be done by using a bright background, or by increasing the colour intensity, sharpness and detail of the subject in the image.
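If you want a rough sense of whether your subject and background colours are distinct enough, the standard relative-luminance contrast ratio is a useful yardstick. The sketch below is purely illustrative and not part of the Generate Anything tool; it assumes you can sample a representative subject colour and background colour from your image.

```python
# Illustrative sketch (not part of the Generate Anything tool): gauge how
# strongly two sampled colours contrast, using the WCAG relative-luminance
# contrast ratio. Higher is better for subject/background separation.

def relative_luminance(rgb):
    """Relative luminance of an sRGB colour (0-255 channels), per WCAG 2.0."""
    def channel(c):
        c = c / 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(colour_a, colour_b):
    """Contrast ratio between two colours: 1 (no contrast) to 21 (maximum)."""
    lighter, darker = sorted(
        (relative_luminance(colour_a), relative_luminance(colour_b)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# A dark subject on a bright white background contrasts strongly...
print(round(contrast_ratio((30, 30, 30), (255, 255, 255)), 1))
# ...while two similar mid-greys barely contrast at all.
print(round(contrast_ratio((120, 120, 120), (130, 130, 130)), 1))
```

If the ratio for your sampled colours is close to 1, consider brightening the background or intensifying the subject's colours as described above.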
Bear in mind that when using an image with a background, there is always a chance that some of the background may leak into the final texture, or that mesh irregularities may occur, especially where the model source and background are similar in colour.
Be wary of artifacts in your images, especially in white or empty space, as these may create anomalies in your model generation.
Ensure your text prompts are clear and concise. Spelling errors may cause issues with generation, so be sure to proofread your prompt before generating.
When writing your text prompts, the level of detail matters depending on what you're after. For example, if you're looking for a generic cat model, you can simply use the prompt "cat". However, if you require specific details, you can include them in the prompt - for example, "cat with a fluffy tail" adds a small amount of further detail specific to your generation.
You can take this further by being more precise, although generations with excessive detail may vary in quality depending on the category. Another example would be "pink cat with tiger print fur", which contains much more precise detail.
Colours can also be defined in text prompts, and the level of detail used can create variations in the result. More precise colour terms ("dark blue", "light blue") can be helpful, but general colours are likely to yield better results. Note that hex colour codes will not work.
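Since hex colour codes are not recognised, it can be worth screening a prompt for them before generating, so they can be swapped for plain colour words. The following is a minimal illustrative sketch (not part of the tool) using Python's standard `re` module:

```python
import re

# Illustrative sketch (not part of the Generate Anything tool): flag hex
# colour codes in a text prompt so they can be replaced with plain colour
# words (e.g. "dark blue") before generation.
HEX_CODE = re.compile(r"#(?:[0-9a-fA-F]{6}|[0-9a-fA-F]{3})\b")

def find_hex_codes(prompt):
    """Return any hex colour codes found in a text prompt."""
    return HEX_CODE.findall(prompt)

print(find_hex_codes("cat with #ff69b4 fur"))  # → ['#ff69b4']
print(find_hex_codes("dark blue cat"))         # → []
```

A prompt like "cat with #ff69b4 fur" would be better rewritten as "pink cat" before submitting it.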
Some 'aesthetics' may be used in text prompts (e.g. cyberpunk, vaporwave, grunge); however, these are not guaranteed to be picked up by the system and may provide varying results, making them much less reliable than colour terms.
Generating models you want to animate with Animate Anything? Here are some tips to help ensure your model will meet the constraints of the Animate Anything system.
Make sure your model's category is supported by Animate Anything; you can find this out by referencing this table.
For all models, you need to ensure they meet the constraints of our systems. For text-prompt-generated models, it can be useful to include the term 'symmetrical' (or simply to keep the prompt refinement checkbox ticked), and for image-generated models, a subject with a visibly symmetrical shape can be helpful.
For biped human and humanoid models, the pose can be really important. We recommend aiming for an A-pose when generating models in these categories.
If you're using Text-To-3D, you can select the optional prompt refinement checkbox to default to an A-pose for your model when generating humans and humanoids.
For further information on preparing a model for Animate Anything, you can take a look at our other documentation.