Version: 1.3.x

Project Export

Project export describes how to synthesize a new video by sending an API request whose body is written in JSON.


1. API endpoint

https://aistudios.com/api/odin/editor/project

2. Request parameters

| key | desc | type | required | default |
| --- | --- | --- | --- | --- |
| scenes | Information about the scenes. | Array(JSON) | true | - |
| scenes[].AIModel | Model clip. | JSON | true | - |
| scenes[].AIModel.model | The model's unique key value. | String | true | - |
| scenes[].AIModel.clothes | The unique key value of the outfit. | String | true | - |
| scenes[].AIModel.language | The language the script is written in. | String | true | - |
| scenes[].AIModel.script | Text for the AI model to read. It must match the language of the model. | String | true | - |
| scenes[].AIModel.scale | The size of the model. The default is 1; a value of 2 doubles the size and 0.5 halves it. | Float | false | 1 |
| scenes[].AIModel.layer | Layer order. Clips with a higher layer value are drawn in front of clips with a lower value. | Int | false | 300 |
| scenes[].AIModel.locationX | The X coordinate of the model. At 0 the midpoint of the model is at the center of the video; at 0.5 it is at the right edge; at -0.5 it is at the left edge. | Float | true | - |
| scenes[].AIModel.locationY | The Y coordinate of the model. At 0 the midpoint of the model is at the center of the video; at 0.5 it is at the bottom edge; at -0.5 it is at the top edge. | Float | true | - |
| scenes[].clips | Fields for adding clips such as text, images, and background images. | Array(JSON) | false | [] |
| scenes[].clips[].type | The type of clip. Use image for an image, background for a background image, and text for text. | String enum(image, background, text) | true | - |
| scenes[].clips[].detail | Information about the clip. | JSON | true | - |
| scenes[].clips[].detail.url | Image path (URL). Used when type is image or background. | String | true | - |
| scenes[].clips[].detail.scale | Image or text size. Used when type is image or text. For an image, a scale of 1 means the longer of its width and height spans 50% of the video; likewise, 1.5 spans 75% and 2 spans 100%. For text, the exact size is hard to predict because each font has different metrics. | Float | false | 1 |
| scenes[].clips[].detail.locationX | The X coordinate of the clip. At 0 the midpoint of the clip is at the center of the video; at 0.5 it is at the right edge; at -0.5 it is at the left edge. | Float | false | 0 |
| scenes[].clips[].detail.locationY | The Y coordinate of the clip. At 0 the midpoint of the clip is at the center of the video; at 0.5 it is at the bottom edge; at -0.5 it is at the top edge. | Float | false | 0 |
| scenes[].clips[].detail.layer | Layer order. Clips with a higher layer value are drawn in front of clips with a lower value. | Int | false | 500 |
| scenes[].clips[].detail.textSource | For a text clip, the content of the text. | String | false | - |
| scenes[].clips[].detail.font | Text font properties. | String | false | Noto Sans |
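
For reference, a request body with these fields can be assembled programmatically before it is sent. The sketch below is a minimal Python example that builds the scenes payload as a dictionary, reusing the model, outfit, and asset values from the sample request in section 4; the "en" value for language is an assumption about the expected format, not something specified in this document.

```python
import json

# Minimal sketch: build the JSON body described in the table above.
# model, clothes, and the asset URL are taken from the sample request
# in section 4; "en" for language is an assumed value.
payload = {
    "scenes": [
        {
            "AIModel": {
                "model": "M000004017",    # model's unique key (required)
                "clothes": "BG00006160",  # outfit's unique key (required)
                "language": "en",         # language of the script (required)
                "script": "Sample script for exporting project.",
                "locationX": -0.28,       # 0 = center, 0.5 = right edge, -0.5 = left edge
                "locationY": 0.19,        # 0 = center, 0.5 = bottom edge, -0.5 = top edge
                "scale": 1,               # 1 = default model size
            },
            "clips": [
                {
                    "type": "background",
                    "detail": {
                        "url": "https://cdn.aistudios.com/images/news/aiplatform_background_gradient.png"
                    },
                },
                {
                    "type": "text",
                    "detail": {"textSource": "hello!", "scale": 1.2},
                },
            ],
        }
    ]
}

print(json.dumps(payload, indent=2))
```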

3. Response parameters

| key | desc | type |
| --- | --- | --- |
| success | Whether the request succeeded. | Boolean |
| cube_used | Number of cubes used. | Int |
| key | Project ID. | String |
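
Assuming the endpoint returns these fields as a flat JSON object, a response-handling step might look like the following sketch; the literal values in `response_body` are placeholders, not real output.

```python
# Minimal sketch: inspect the fields listed above from a parsed response.
# `response_body` stands in for the JSON object returned by the endpoint.
response_body = {"success": True, "cube_used": 1, "key": "PROJECT_ID_PLACEHOLDER"}

if response_body.get("success"):
    project_key = response_body["key"]         # project ID to reference later
    cubes = response_body.get("cube_used", 0)  # number of cubes consumed
    print(f"Export accepted: project {project_key}, {cubes} cube(s) used")
else:
    print("Export request failed:", response_body)
```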

4. Sample request

```bash
curl https://aistudios.com/api/odin/editor/project \
  -H "Authorization: ${API KEY}" \
  -H "Content-Type: application/json" \
  -X POST \
  -d '{
    "scenes": [
      {
        "AIModel": {
          "script": "Sample script for exporting project.",
          "model": "M000004017",
          "clothes": "BG00006160",
          "locationX": -0.28,
          "locationY": 0.19,
          "scale": 1
        },
        "clips": [
          {
            "type": "background",
            "detail": {
              "url": "https://cdn.aistudios.com/images/news/aiplatform_background_gradient.png"
            }
          },
          {
            "type": "image",
            "detail": {
              "url": "https://cdn.aistudios.com/images/news/aiplatform_background_space.png",
              "locationX": 0.3,
              "locationY": -0.3,
              "scale": 1
            }
          },
          {
            "type": "text",
            "detail": {
              "textSource": "hello!",
              "locationX": -0.2,
              "locationY": -0.2,
              "scale": 1.2,
              "font": "SourceSerifPro-Regular"
            }
          }
        ]
      }
    ]
  }'
```
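
The same call can be made from Python. The sketch below is a minimal equivalent of the curl command above using the third-party requests library; `API_KEY` is a placeholder for your own token, and the payload is trimmed to a single scene with no extra clips (see section 2 for the full structure).

```python
import requests  # third-party HTTP client; install with `pip install requests`

API_KEY = "YOUR_API_KEY"  # placeholder: replace with your own API key

# Minimal single-scene payload; see section 2 for all available fields.
payload = {
    "scenes": [
        {
            "AIModel": {
                "script": "Sample script for exporting project.",
                "model": "M000004017",
                "clothes": "BG00006160",
                "locationX": -0.28,
                "locationY": 0.19,
                "scale": 1,
            },
            "clips": [],
        }
    ]
}

resp = requests.post(
    "https://aistudios.com/api/odin/editor/project",
    headers={"Authorization": API_KEY, "Content-Type": "application/json"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # expected fields: success, cube_used, key
```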