Documentation

Welcome to the Anything World API!

You can use the Anything World API in your application to request 3D models and other data based on different search criteria.

Python users can also use the anything-world package for easier integration. It's available at https://pypi.org/project/anything-world/ and can be installed with pip install anything-world.

Basic notions

The API is divided into the library API (used to search for and obtain models from our 3D asset library) and the processing API (used to rig, animate or otherwise process your own 3D models using our AI system). Both use the same API key.

API key

After generating an API key in our Dashboard, you can use it to authorize requests to the Anything World API.

Bear in mind that many endpoints require the API key to generate results.

General Format

https://api.anything.world/anything?key=<API_KEY>&name=<NAME>

Example

https://api.anything.world/anything?key=<API_KEY>&name=bird
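For illustration, the general format can be assembled programmatically. A minimal Python sketch using only the standard library (the key value is a placeholder):

```python
from urllib.parse import urlencode

BASE_URL = "https://api.anything.world"

def build_url(endpoint, **params):
    """Build a request URL for the Anything World API, e.g. /anything?key=...&name=bird."""
    return f"{BASE_URL}/{endpoint}?{urlencode(params)}"

url = build_url("anything", key="MY_API_KEY", name="bird")
print(url)  # https://api.anything.world/anything?key=MY_API_KEY&name=bird
```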

3D Model Library

The following requests can be used to download 3D models from our library. These are the same models available in our gallery. Here you can find models animated by our AI pipeline as well as static ones; they are ideal for creating new games or entire 3D worlds without having to design or animate by hand.

Get a specific model by name

GET https://api.anything.world/anything?key=<API_KEY>&name=<NAME>

Retrieve a single model (JSON response) based on its name. If there is an exact match it is returned; otherwise a similar result is returned (if found), or no model at all.

Query Parameters

  • key* (String): API key

  • name* (String): Name to search, e.g. dog

  • fuzzy (Boolean): Whether to enable approximate matching (true by default)

{
  "_id": "5e958b911c9d440000f9771d",
  "name": "dog",
  "creature": "dog",
  "group": "mammals",
  "type": "quadruped",
  "behaviour": "walk",
  "pet": true,
  "detail": "medium",
  "model": {
    "parts": {
      "body": "https://assets.anything.world/dog/body.obj?Expires=1629890114&KeyName=models&Signature=phbXMpLsJBzNDTXQAdJW7phnvSg",
      "head": "https://assets.anything.world/dog/head.obj?Expires=1629890114&KeyName=models&Signature=h6t0BWNXhCoB3DXP0-eNnKULCJQ",
      "leg_front_left_bot": "https://assets.anything.world/dog/leg_front_left_bot.obj?Expires=1629890114&KeyName=models&Signature=K9W3wfPT9WhpCH6jZXRRnVMN7G4",
      "leg_front_left_top": "https://assets.anything.world/dog/leg_front_left_top.obj?Expires=1629890114&KeyName=models&Signature=AfTkg0KIlEWw-CFtJM0yLcrVVCw",
      "leg_front_right_bot": "https://assets.anything.world/dog/leg_front_right_bot.obj?Expires=1629890114&KeyName=models&Signature=TcYVhFISHsV1FWtu0OVdSe0rYQU",
      "leg_front_right_top": "https://assets.anything.world/dog/leg_front_right_top.obj?Expires=1629890114&KeyName=models&Signature=GP8CokHpOUM93kbzv9rhWWWrAxY",
      "leg_hind_left_bot": "https://assets.anything.world/dog/leg_hind_left_bot.obj?Expires=1629890114&KeyName=models&Signature=Xit_bZ8J7IiVY8EqO4cQLw-7twU",
      "leg_hind_left_top": "https://assets.anything.world/dog/leg_hind_left_top.obj?Expires=1629890114&KeyName=models&Signature=2PASF7b7Tj-JPARj80hPAzuxohM",
      "leg_hind_right_bot": "https://assets.anything.world/dog/leg_hind_right_bot.obj?Expires=1629890114&KeyName=models&Signature=OqWkuXy9sqiOcpqd9vGBb8PNAOo",
      "leg_hind_right_top": "https://assets.anything.world/dog/leg_hind_right_top.obj?Expires=1629890114&KeyName=models&Signature=c7ZU1jQz-GyamcYyjVYlaPfjpcM",
      "tail": "https://assets.anything.world/dog/tail.obj?Expires=1629890114&KeyName=models&Signature=CBsGk-k-zL096jbaCdip7Rr0tt8"
    },
    "other": {
      "material": "https://assets.anything.world/dog/dog.mtl?Expires=1629890114&KeyName=models&Signature=y7vPL9ZbY0QJwLZuVGkThNBu8x0",
      "model": "https://assets.anything.world/dog/dog.obj?Expires=1629890114&KeyName=models&Signature=qnIGLYo3mXN2NnVMhuWOBMxxKOU",
      "texture": [
        "https://assets.anything.world/dog/basecolor.png?Expires=1629890114&KeyName=models&Signature=oNIXgYIHMletKl009yrow_zkAHk"
      ],
      "reference": "https://assets.anything.world/dog/reference.png?Expires=1629890114&KeyName=models&Signature=EF1MrK08ZLGpguI_8r-kCwysWcM"
    }
  },
  "author": "Poly by Google",
  "source": "AW-API",
  "forest": false,
  "beach": false,
  "city": true,
  "desert": false,
  "farm": false,
  "sea": false,
  "icescape": false,
  "jungle": false,
  "lake": false,
  "pond": false,
  "river": false,
  "swamp": false,
  "grass": false,
  "rural": true,
  "urban": false,
  "garden": true,
  "scale": {
    "height": "1.2m"
  },
  "source_endpoint": "aw_api"
}
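As a sketch of consuming this response in Python (the dict below is a trimmed stand-in for the JSON above, with the signed query strings shortened), the part URLs can be collected like this:

```python
import json

# A trimmed stand-in for the /anything response shown above
response_text = """
{
  "name": "dog",
  "model": {
    "parts": {
      "body": "https://assets.anything.world/dog/body.obj?Expires=1629890114",
      "head": "https://assets.anything.world/dog/head.obj?Expires=1629890114"
    },
    "other": {
      "material": "https://assets.anything.world/dog/dog.mtl?Expires=1629890114"
    }
  }
}
"""

model = json.loads(response_text)

# Each entry in model.parts maps a part name to a signed, expiring URL
part_urls = {name: url for name, url in model["model"]["parts"].items()}
print(sorted(part_urls))  # ['body', 'head']
```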

Get all models matching the query

GET https://api.anything.world/anything?key=<API_KEY>&search=<QUERY>

Get a JSON array representing all models that matched the query string, which can be anything, such as a name (e.g. "cat"), a tag (e.g. "Christmas"), a habitat (e.g. "jungle" or "desert") or a taxonomic category (e.g. "animal" or "object"). The response is a JSON object with an array of objects, where each object represents the data of a model in the same format as the endpoint for obtaining a specific model, shared above.

Query Parameters

  • key* (String): Your API key

  • search* (String): Search query

  • fuzzy (Boolean): Whether approximate matches are returned (default is true)

{
    // Response
}

AI Model Processing

These requests give you access to our AI pipeline. With them, you'll be able to animate your own 3D models. To learn which types of models we support and the other requirements for a proper animation, please refer to this page. At the end, we also present endpoints to generate 3D models from text or images.

Automatically rig and animate a given 3D model

POST https://api.anything.world/animate

Get a JSON response with a "model_id" representing an unique identifier for the 3D model being animated. It’s important to note that, because we’re sending files through the request, all parameters and files should be encoded as formdata.

The files should be sent as the value of the files key, regardless of the number of files. The value should be a list of tuples, each containing (file_name, file_content, content_type). File names should have the expected extension and content type. Currently we support the following file types:

Meshes: OBJ, GLB, GLTF, FBX and DAE

Materials: MTL

Textures: JPG, PNG, BMP, GIF, TIF, TGA and TARGA

Please also make sure each file is sent with the content type expected for its format.

Make sure that all your mesh, material and texture files reference each other as expected. For example, a material file named "foo.mtl" should be properly referenced inside "model.obj". The rule of thumb is: if you can open your model in a 3D editor like Blender and it shows all expected material and texture bindings, it should work with our API as well.

It's also possible to send a ZIP file as your model data. Please make sure to include all the files in a single folder (don't use subfolders, e.g. for materials) and compress it as a valid .zip file (with the right content type).
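The (file_name, file_content, content_type) tuples described above map directly onto a multipart upload. A minimal Python sketch (the file names and content types here are illustrative assumptions; use the ones matching your actual files):

```python
# Build the multipart "files" payload as a list of
# (field_name, (file_name, file_content, content_type)) tuples,
# the shape HTTP libraries such as `requests` accept for uploads.
def build_files_payload(named_files):
    """named_files: list of (file_name, bytes_content, content_type) tuples."""
    return [("files", (name, content, ctype)) for name, content, ctype in named_files]

payload = build_files_payload([
    ("dog.obj", b"...mesh data...", "application/octet-stream"),  # assumed content type
    ("dog.mtl", b"...material data...", "text/plain"),            # assumed content type
])
print(len(payload), payload[0][0])  # 2 files
```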

This request is asynchronous and follows a polling strategy, meaning that a response will be returned by the endpoint as fast as possible (in a few seconds). The animation process itself takes time (approx. 10 minutes). Please use the generated "model_id" as a parameter in a request to the https://api.anything.world/user-processed-model endpoint to get the current status of the model being animated. When the request returns the final stage, the model has been animated successfully and the generated rigged files will be available to download. Please check the documentation for the https://api.anything.world/user-processed-model endpoint just below, in the next section.

To better illustrate this process, we provide an open source Python library that can be used as an example of how to communicate with our API: https://github.com/anythingworld/anything-world-python. Since it has minimal requirements, this implementation can be easily translated to other programming languages.

It's important to note that you have a free allowance of model processing credits which renews every month. Additional credits can be purchased in your profile page.
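The polling strategy described above can be sketched as a generic loop in Python (a minimal example: the attempt count and 2-second interval are assumptions, not API requirements, and check_status stands in for a real request to user-processed-model):

```python
import time

def poll(check_status, max_attempts=10, interval_s=2.0, sleep=time.sleep):
    """Repeatedly call check_status() until it returns a result.

    check_status should return the model JSON once processing is done,
    or None while the model is still being processed.
    """
    for attempt in range(1, max_attempts + 1):
        result = check_status()
        if result is not None:
            return result
        sleep(interval_s)  # wait before the next attempt
    raise TimeoutError(f"model not ready after {max_attempts} attempts")

# Example with a fake check that succeeds on the third attempt
attempts = iter([None, None, {"model_id": "abc123"}])
result = poll(lambda: next(attempts), sleep=lambda s: None)
print(result["model_id"])  # abc123
```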

Request Body

  • key* (String): Your API key

  • model_name* (String): A name to be given to the model

  • model_type* (String): Type of the given model (e.g. "cat"). This parameter is only mandatory if auto_classify=false. If model_type is provided, text-based classification is always performed. If model_type is not provided, auto_classify must be true, otherwise an error is returned since the system does not have enough information to classify the model. If model_type is not provided and auto_classify=true, 3D classification is performed instead of text-based classification. Providing model_type is recommended over auto_classify since it is more accurate, especially for best results in animal animation.

  • can_use_for_internal_improvements (String): "true" if the user allows AW to use the model for internal improvements (ML training, etc.), "false" otherwise. If not specified, it defaults to "false"

  • author (String): Name of the author of the model. If not specified, it defaults to an empty string

  • license (String): License attributed to the model. Either "cc0", "ccby" or "mit". If not specified, it defaults to "ccby"

  • symmetry* (String): "true" if the model is symmetric, "false" otherwise

  • files* (String): Files to be sent for animation, encoded as a list of tuples like (file_name, file_content, content_type)

  • auto_classify (String): "true" if the user allows AW to automatically classify the model during processing, "false" otherwise. If not specified, it defaults to "false". This parameter becomes mandatory if model_type is not specified, since the system must then classify the model automatically; please see the notes for model_type

  • auto_rotate (String): "true" if the user allows AW to automatically rotate the model during processing, "false" otherwise. If not specified, it defaults to "false"

{
    // Response
}

Generate extra animations

POST https://api.anything.world/animate-processed

This endpoint creates additional animations beyond the base ones generated by the animate endpoint. Please only call this endpoint with the id of a model that has already been fully processed by the animate endpoint and therefore has all the base animations. As of now, the only model types supporting extra animations are humanoids (women, men, etc.), cats, dogs and horses (listed in the table below). You can call this endpoint multiple times, but generation of each animation will only be attempted once.

Supported extra animations

Category
Supported animation names

Humanoids

Walk: 'walk_back', 'zombie_walk'

Jump: 'running_jump', 'running_jump_end', 'running_jump_fall', 'running_jump_start'

Advanced Animations: 'crouch', 'crouch_walk', 'crawl', 'dance', 'die', 'eat', 'get_up_from_crouch', 'get_up_from_lying_down', 'get_up_from_sitting', 'lie_down', 'lying_down_idle', 'sit', 'sit_down_to_drive', 'sitting_eat', 'sitting_idle'

Communicate: 'shout', 'talk', 'wave' 

Fight: 'arm_parry', 'axe_attack', 'axe_draw', 'axe_idle', 'axe_parry', 'axe_sheathe', 'cross_punch_left', 'cross_punch_right', 'kick_left', 'kick_right', 'punch_left', 'punch_right', 'sword_draw', 'sword_idle', 'sword_parry', 'sword_sheathe'

Cats

Advanced Animations: 'get_up_from_lying_down', 'lie_down', 'lying_down_idle'

Fight: 'bite_attack', 'paw_attack'

Dogs

Advanced Animations: 'get_up_from_lying_down', 'get_up_from_sitting', 'give_paw', 'lie_down', 'lying_down_idle', 'sit', 'sitting_idle', 'sleep'

Fight: 'bite_attack', 'paw_attack', 'threaten'

Horses

Walk: 'canter', 'trot' 

Advanced Animations: 'raise_up', 'shake_head', 'stomp'

Fight: 'head_attack', 'kick'
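Because each category only supports a fixed set of extra animation names, it can help to validate requests client-side before calling the endpoint. A Python sketch (the sets below only sample the table above):

```python
# Subset of the supported-animations table above, keyed by category
SUPPORTED_ANIMATIONS = {
    "horse": {"canter", "trot", "raise_up", "shake_head", "stomp",
              "head_attack", "kick"},
    "dog": {"get_up_from_lying_down", "get_up_from_sitting", "give_paw",
            "lie_down", "lying_down_idle", "sit", "sitting_idle", "sleep",
            "bite_attack", "paw_attack", "threaten"},
}

def unsupported_animations(category, animation_names):
    """Return the requested names that the given category does not support."""
    supported = SUPPORTED_ANIMATIONS.get(category, set())
    return [name for name in animation_names if name not in supported]

print(unsupported_animations("horse", ["canter", "sleep"]))  # ['sleep']
```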

Request Body

  • key* (String): Your API key

  • id* (String): Model id (returned by the https://api.anything.world/animate endpoint, for instance)

  • animation_names* (Array): Array of animation names as strings, e.g. ["running_jump", "sleep"]

Response

OK response.

{
    "model_id": "<hash string identifying the model>"
}

Automatically rig a given 3D model (not including animations)

POST https://api.anything.world/rig

This endpoint is similar to the /animate endpoint above, but it only generates the rigging of the provided 3D model rather than also generating all the animations.

Get a JSON response with a "model_id" representing an unique identifier for the 3D model being rigged. It’s important to note that, because we’re sending files through the request, all parameters and files should be encoded as formdata.

The files should be sent as the value of the files key, regardless of the number of files. The value should be a list of tuples, each containing (file_name, file_content, content_type). File names should have the expected extension and content type. Currently we support the following file types:

Meshes: OBJ, GLB, GLTF, FBX and DAE

Materials: MTL

Textures: JPG, PNG, BMP, GIF, TIF, TGA and TARGA

Please also make sure each file is sent with the content type expected for its format.

Make sure that all your mesh, material and texture files reference each other as expected. For example, a material file named "foo.mtl" should be properly referenced inside "model.obj". The rule of thumb is: if you can open your model in a 3D editor like Blender and it shows all expected material and texture bindings, it should work with our API as well.

It's also possible to send a ZIP file as your model data. Please make sure to include all the files in a single folder (don't use subfolders, e.g. for materials) and compress it as a valid .zip file (with the right content type).

This request is asynchronous and follows a polling strategy, meaning that a response will be returned by the endpoint as fast as possible (in a few seconds). The rigging process itself takes time (approx. 3 minutes). Please use the generated "model_id" as a parameter in a request to the https://api.anything.world/user-processed-model endpoint to get the current status of the model being rigged. When the request returns the final stage, the model has been rigged successfully and the generated rigged files will be available to download. Please check the documentation for the https://api.anything.world/user-processed-model endpoint just below, in the next section.

To better illustrate this process, we provide an open source Python library that can be used as an example of how to communicate with our API: https://github.com/anythingworld/anything-world-python. Since it has minimal requirements, this implementation can be easily translated to other programming languages.

It's important to note that you have a free allowance of model processing credits which renews every month. Additional credits can be purchased in your profile page.

Request Body

  • key* (String): Your API key

  • model_name* (String): A name to be given to the model

  • model_type* (String): Type of the given model (e.g. "cat"). This parameter is only mandatory if auto_classify=false. If model_type is provided, text-based classification is always performed. If model_type is not provided, auto_classify must be true, otherwise an error is returned since the system does not have enough information to classify the model. If model_type is not provided and auto_classify=true, 3D classification is performed instead of text-based classification. Providing model_type is recommended over auto_classify since it is more accurate, especially for best results in animal animation.

  • can_use_for_internal_improvements (String): "true" if the user allows AW to use the model for internal improvements (ML training, etc.), "false" otherwise. If not specified, it defaults to "false"

  • author (String): Name of the author of the model. If not specified, it defaults to an empty string

  • license (String): License attributed to the model. Either "cc0", "ccby" or "mit". If not specified, it defaults to "ccby"

  • symmetry* (String): "true" if the model is symmetric, "false" otherwise

  • files* (String): Files to be sent for rigging, encoded as a list of tuples like (file_name, file_content, content_type)

  • auto_classify (String): "true" if the user allows AW to automatically classify the model during processing, "false" otherwise. If not specified, it defaults to "false". This parameter becomes mandatory if model_type is not specified, since the system must then classify the model automatically; please see the notes for model_type

  • auto_rotate (String): "true" if the user allows AW to automatically rotate the model during processing, "false" otherwise. If not specified, it defaults to "false"

Response

OK response. The model was created and its id is returned.

{
    "model_id": "<hash string identifying the model>"
}

Get a model state processed by the AI pipeline

GET https://api.anything.world/user-processed-model?key=<API_KEY>&id=<MODEL_ID>&stage=<MODEL_STAGE>

Get a JSON response representing the current state of a model being processed by the AI pipeline. If the model has completed the stage specified in the query (or is fully processed, if no stage is given), the response is a model entry in a similar format to the JSON returned by the https://api.anything.world/anything endpoint. An error is returned if the model hit a processing error, or if it has not yet reached the requested stage or is already past it.

Note that you can speed up your request by setting the stage parameter strategically, depending on your use case:

  • If you don't need formats beyond the basic ones, you don't need to specify a stage parameter (or specify stage="done"). This way you'll receive a .obj file for static and vehicle models, and .fbx and .glb for animated models. Our AI pipeline returns the models as soon as they are processed, but keeps performing format conversion and thumbnail generation in the background; the model's stage changes to formats_conversion_finished and thumbnails_generation_finished when those operations are done. This is the preferred way to get a result as fast as possible;

  • If you want the results in .fbx and .glb for static and vehicle models, as well as .dae and .gltf formats for all models, request with stage="formats_conversion_finished". Your request will take longer to finish, but you'll get the extra formats. You still save some time by not waiting for thumbnail generation to finish;

  • If you want not only the extra formats but also thumbnail images previewing the 3D model, use stage="thumbnails_generation_finished". This is the longest request and gives no speed-up compared with the previous options.
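The trade-off above can be captured in a small helper that picks the stage value from what you need (a Python sketch; the stage strings are the ones from the bullets above):

```python
def choose_stage(need_extra_formats=False, need_thumbnails=False):
    """Pick the fastest stage that still yields everything required."""
    if need_thumbnails:
        return "thumbnails_generation_finished"  # slowest, includes previews
    if need_extra_formats:
        return "formats_conversion_finished"     # extra formats, no thumbnails
    return "done"                                # fastest, basic formats only

print(choose_stage())                         # done
print(choose_stage(need_extra_formats=True))  # formats_conversion_finished
```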

Within the JSON structure, the model rig sub-object can be found in model.rig. At its raw level, you can find the rigged mesh file in the formats cited above, e.g. model.rig.FBX contains the FBX version of the rig. The animations can be found in model.rig.animations, with a JSON sub-object for each animation and, inside that, a field for each format, e.g. model.rig.animations.walk.FBX contains the walk animation (only available for some categories) in FBX format. If the model is a vehicle rather than a rigged and animated model, you can find its separate parts in the model.parts sub-object of the JSON response.
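As a sketch of navigating that structure in Python (the dict below is a stand-in shaped like the description above; real responses contain signed URLs):

```python
# Stand-in shaped like the model JSON described above; URLs are hypothetical
model = {
    "rig": {
        "FBX": "https://assets.example/rig.fbx",
        "animations": {
            "walk": {
                "FBX": "https://assets.example/walk.fbx",
                "GLB": "https://assets.example/walk.glb",
            },
        },
    },
}

def animation_url(model, animation, fmt):
    """Look up one animation file, e.g. model.rig.animations.walk.FBX."""
    return model["rig"]["animations"][animation][fmt]

print(animation_url(model, "walk", "FBX"))  # https://assets.example/walk.fbx
```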

It's important to note that you have a free allowance of model processing credits which renews every month. Additional credits can be purchased in your profile page.

Query Parameters

  • key* (String): Your API key

  • id* (String): Model id (returned by the https://api.anything.world/animate endpoint, for instance)

  • stage (String): Stage to check. The model JSON will only be returned if the stage is exactly matched. If not specified, it only checks whether the model is fully processed

{
    // Response
}

Get all models processed by the AI pipeline for the user

GET https://api.anything.world/user-processed-models?key=<API_KEY>

Get a JSON response with an array of data from all the models processed with the AI pipeline by the user.

Each element of the array follows the same format as a processed model (a rigged and animated model if the category allowed it, otherwise a part-split vehicle or static mesh) as described for the user-processed-model endpoint (which you can check for more context; just note that the endpoint described here does not offer the further control of the stage parameter).

Within the JSON structure of an element of the array (that is, a processed model), the rig data sub-object can be found in model.rig. At its raw level, you can find the rigged mesh file in the formats cited above, e.g. model.rig.FBX contains the FBX version of the rig. The animations can be found in model.rig.animations, with a JSON sub-object for each animation and, inside that, a field for each format, e.g. model.rig.animations.walk.FBX contains the walk animation (only available for some categories) in FBX format. If the model is a vehicle rather than a rigged and animated model, you can find its separate parts in the model.parts sub-object of the JSON response.

It's important to note that you have a free allowance of model processing credits which renews every month. Additional credits can be purchased in your profile page.

Query Parameters

  • key* (String): Your API key

{
    // Response
}

Generate model from text

POST https://api.anything.world/text-to-3d

Get a JSON response with a "model_id" representing an unique identifier for the 3D model being animated.

This request is asynchronous and follows a polling strategy, meaning that a response will be returned by the endpoint as fast as possible (in a few seconds). The generation process itself takes time (approx. 6 minutes). Please use the generated "model_id" as a parameter in a request to the https://api.anything.world/user-generated-model endpoint to get the current status of the model being generated. When the request returns the final stage, the model has been generated successfully and the generated files will be available to download. Please check the documentation for the https://api.anything.world/user-generated-model endpoint just below, in the next section.

To better illustrate this process, we provide an open source Python library that can be used as an example of how to communicate with our API: https://github.com/anythingworld/anything-world-python. Since it has minimal requirements, this implementation can be easily translated to other programming languages.

It's important to note that you have a free allowance of model generation credits which renews every month. Additional credits can be purchased in your profile page.

Request Body

  • key* (String): Your API key

  • text_prompt* (String): Text input that directs the model generation. The maximum prompt length is 1024 characters, equivalent to approximately 100 words

  • refine_prompt (Boolean): Refine the prompt to encourage a standard pose that will facilitate animation. Treated as true if not specified

  • can_use_for_internal_improvements (Boolean): true if the user allows AW to use the model for internal improvements (ML training, etc.), false otherwise. If not specified, it defaults to false

  • can_be_public (Boolean): true if the user allows the model to be made available for public use, false otherwise. If not specified, it defaults to false

{
    "model_id": "<hash string identifying the model>"
}
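Given the documented 1024-character limit on text_prompt, a quick client-side check avoids a wasted request (a minimal Python sketch):

```python
MAX_PROMPT_CHARS = 1024  # documented limit for text_prompt

def validate_prompt(text_prompt):
    """Raise if the prompt exceeds the documented 1024-character limit."""
    if len(text_prompt) > MAX_PROMPT_CHARS:
        raise ValueError(
            f"prompt is {len(text_prompt)} chars; max is {MAX_PROMPT_CHARS}"
        )
    return text_prompt

validate_prompt("A blue cat")  # fine, returned unchanged
try:
    validate_prompt("x" * 2000)
except ValueError as e:
    print("rejected:", e)  # prints a rejection message
```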

Generate model from image

POST https://api.anything.world/image-to-3d

Get a JSON response with a "model_id" representing an unique identifier for the 3D model being generated. It’s important to note that, because we’re sending files through the request, all parameters and files should be encoded as formdata.

This endpoint is pretty similar to the /text-to-3d endpoint above, however it will generate the 3D model via image.

The file should be sent as the value of the files key. As value, a list of tuples should be provided with each tuple containing: (file_name, file_content, content_type). The file name should have the expected extension and content-type. Currently we support the following file types:

Image files: JPG, PNG and JPEG

Request Body

  • key* (String): Your API key

  • model_name (String): A name to be given to the model. If not provided, the system uses the file name as model_name

  • files* (String): Specifies the image input, encoded as a list of tuples like (file_name, file_content, content_type)

  • can_use_for_internal_improvements (String): "true" if the user allows AW to use the model for internal improvements (ML training, etc.), "false" otherwise. If not specified, it defaults to "false"

  • can_be_public (String): "true" if the user allows the model to be made available for public use, "false" otherwise. If not specified, it defaults to "false"

{
    "model_id": "<hash string identifying the model>"
}

Get generated model

GET https://api.anything.world/user-generated-model?key=<API_KEY>&id=<MODEL_ID>

Get a JSON response representing the current state of a model being generated by the mesh generation system. If the model is fully generated, the response is a model entry in a similar format to the JSON returned by the https://api.anything.world/anything endpoint. An error is returned if the model hit a processing error or has not yet been generated.

Within the JSON structure, you will find the mesh file in model.mesh.glb.

Format support notice: only GLB is supported in Generate Anything. For additional formats such as DAE, FBX, GLTF, OBJ, PLY, STL, X3D and USDZ, use the /animate endpoint with the category set to static.

It's important to note that you have a free allowance of model processing credits which renews every month. Additional credits can be purchased in your profile page.

The generated model files can then be submitted to /animate if you want to get the model animated.

Query Parameters

  • key* (String): Your API key

  • id* (String): Model id (returned by the https://api.anything.world/text-to-3d or https://api.anything.world/image-to-3d endpoint, for instance)

{
    // Response
}

Get all models generated for the user

GET https://api.anything.world/user-generated-models?key=<API_KEY>

Get a JSON response with an array of data from all the models generated with the mesh generation system by the user.

Each element of the array follows the same format as a generated model, as described for the user-generated-model endpoint.

Within the JSON structure, at its raw level you can find the mesh file in the format cited above, e.g. model.mesh.glb contains the GLB version of the generated model.

It's important to note that you have a free allowance of model processing credits which renews every month. Additional credits can be purchased in your profile page.

Query Parameters

  • key* (String): Your API key

{
    // Response
}

Example: generate and animate a 3D model

This example demonstrates how to use the /text-to-3d and /animate endpoints in sequence to generate a 3D model, poll for its status, and animate it once the model has been generated.

Prerequisites

  • Node.js installed.

  • axios for making HTTP requests (npm install axios).

  • API credentials (API key).

Code Example

const axios = require('axios');
const FormData = require("form-data");

// Configuration
const API_BASE_URL = process.env.API_BASE_URL || "https://api.anything.world"; // Environment variable fallback
const API_KEY = process.env.API_KEY || "your-api-key";
const MAX_ATTEMPTS = 10;
const POLL_INTERVAL_MS = 2000;

// Constants
const STATUS_CODE_STILL_PROCESSING = 403;
const CODE_STILL_PROCESSING = "Still Processing";

// Function to generate a 3D model
async function generateModel(prompt, refinePrompt = true) {
  try {
    const response = await axios.post(`${API_BASE_URL}/text-to-3d`, {
      text_prompt: prompt,
      key: API_KEY,
      refine_prompt: refinePrompt,
    });
    return response.data.model_id;
  } catch (error) {
    console.error("Error during model generation:", error.response?.data || error.message);
    throw new Error("Failed to generate model.");
  }
}

// Function to poll for status
async function pollStatus(endpoint, id, stage) {
  let attempts = 0;

  while (attempts < MAX_ATTEMPTS) {
    try {
      const url = new URL(`${API_BASE_URL}/${endpoint}`);
      url.searchParams.append("key", API_KEY);
      url.searchParams.append("id", id);
      if (stage) {
        url.searchParams.append("stage", stage);
      }

      const response = await axios.get(url.toString());
      console.log(`Polling attempt ${attempts + 1}: Status =`, response.data);

      return response.data; // Return data if the request succeeds
    } catch (error) {
      const statusCode = error.response?.status;
      const errorCode = error.response?.data?.code;

      if (statusCode === STATUS_CODE_STILL_PROCESSING && errorCode === CODE_STILL_PROCESSING) {
        console.log(
          `Polling attempt ${attempts + 1} for ID ${id}: Still Processing... Retrying in ${
            POLL_INTERVAL_MS / 1000
          } seconds.`
        );
      } else {
        console.error(
          `Polling attempt ${attempts + 1} failed for ID ${id}:`,
          error.response?.data || error.message
        );

        if (attempts === MAX_ATTEMPTS - 1) {
          throw new Error(`Polling for ${endpoint} (ID: ${id}) timed out or failed.`);
        }
      }
    }

    attempts++;
    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
  }

  throw new Error(`Polling for ${endpoint} (ID: ${id}) timed out.`);
}

// utility function to get file name from url
function getFileNameFromUrl(url) {
    return url.substring(url.lastIndexOf('/') + 1, url.indexOf('?') !== -1 ? url.indexOf('?') : url.length);
}

// Download files from signed URLs and keep them in memory
async function getFilesAndAppendToFormData(modelData) {
    // Download the mesh and material in memory using axios
    // (use 'arraybuffer' in Node; 'blob' is only available in browsers)
    const objFile = await axios.get(modelData['model'].mesh.obj, { responseType: 'arraybuffer' });
    const mtlFile = await axios.get(modelData['model'].material, { responseType: 'arraybuffer' });
    let textureFiles = [];
    if (modelData['model'].textures && modelData['model'].textures.length > 0) {
        // Download textures in memory using axios
        textureFiles = await Promise.all(
            modelData['model'].textures.map(async (textureUrl) => {
                const response = await axios.get(textureUrl, { responseType: 'arraybuffer' });
                // Extract the file name from the URL
                return { content: Buffer.from(response.data), fileName: getFileNameFromUrl(textureUrl) };
            })
        );
    }
    const formData = new FormData();
    formData.append('files', Buffer.from(objFile.data), getFileNameFromUrl(modelData['model'].mesh.obj));
    formData.append('files', Buffer.from(mtlFile.data), getFileNameFromUrl(modelData['model'].material));
    textureFiles.forEach((textureFile) => formData.append('files', textureFile.content, textureFile.fileName));
    return formData;
}

// Function to animate a model
async function animateModel(modelData, modelName, modelType, options = {}) {
  try {
    const formData = await getFilesAndAppendToFormData(modelData);

    // Append required fields
    formData.append("key", API_KEY);
    formData.append("model_name", modelName);
    formData.append("model_type", modelType);

    // Append optional fields with defaults
    formData.append("symmetry", options.symmetry || "false");
    formData.append("auto_classify", options.auto_classify || "false");
    formData.append("auto_rotate", options.auto_rotate || "false");
    formData.append("can_use_for_internal_improvements", options.can_use_for_internal_improvements || "false");
    formData.append("author", options.author || "");
    formData.append("license", options.license || "ccby");

    // Make the POST request to the animate endpoint
    const response = await axios.post(`${API_BASE_URL}/animate`, formData, {
      headers: {
        ...formData.getHeaders(),
      },
    });

    console.log("Animation request successful:", response.data);

    return response.data.model_id;
  } catch (error) {
    console.error("Error during animation:", error.response?.data || error.message);
    throw new Error("Failed to animate model.");
  }
}

// Main function to orchestrate the process
async function generateAndAnimateModel() {
  try {
    // Step 1: Generate a 3D model
    const modelPrompt = "A blue cat"; // Customize your prompt here
    const modelId = await generateModel(modelPrompt, true);
    console.log(`Generated model ID: ${modelId}`);

    // Step 2: Poll for the model's status
    const modelData = await pollStatus("user-generated-model", modelId);
    console.log("Model generation completed:", modelData);

    // Step 3: Prepare the request to animate the model
    const modelName = "A blue cat"; // Name your model
    const modelType = "cat"; // Specify the model type
    const animationOptions = {
      symmetry: "true",
      auto_classify: "false",
      auto_rotate: "true",
      can_use_for_internal_improvements: "true",
      author: "John Doe",
      license: "ccby",
    };

    // Step 4: Animate the model
    const animatedModelId = await animateModel(modelData, modelName, modelType, animationOptions);
    console.log(`Animated Model ID: ${animatedModelId}`);

    // Step 5: Poll for the animated model status
    const animationData = await pollStatus("user-processed-model", animatedModelId, 'done'); // mention stage param according to your usecase (More details in user-processed-model api section) 
    console.log("Animation completed:", animationData);

    console.log("Process completed successfully!");
  } catch (error) {
    console.error("Error during the process:", error.message);
  }
}

// Run the full generate-and-animate pipeline
generateAndAnimateModel();
