Ƭ Args: Object

Name | Type |
---|---|
model | string |
Ƭ AudioClassificationArgs: Args & { data: any }

Ƭ AudioClassificationReturn: AudioClassificationReturnValue[]
Ƭ AudioClassificationReturnValue: Object

Name | Type | Description |
---|---|---|
label | string | The label for the class (model specific) |
score | number | A float that represents how likely it is that the audio file belongs to this class. |
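A rough sketch of these shapes (the model id, file name, and scores below are illustrative placeholders, not values taken from this reference):

```ts
import { readFileSync } from "node:fs";

// Conforms to AudioClassificationArgs: the base Args plus raw audio data.
const args = {
  model: "superb/hubert-large-superb-er", // placeholder model id
  data: readFileSync("sample.flac"),      // raw audio bytes; the field is typed as `any`
};

// Illustrative AudioClassificationReturn value (an AudioClassificationReturnValue[]).
const result = [
  { label: "neutral", score: 0.61 },
  { label: "happy", score: 0.27 },
];

// Pick the most likely class.
const best = result.reduce((a, b) => (b.score > a.score ? b : a));
console.log(best.label, best.score);
```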
Ƭ AutomaticSpeechRecognitionArgs: Args & { data: any }
Ƭ AutomaticSpeechRecognitionReturn: Object

Name | Type | Description |
---|---|---|
text | string | The text that was recognized from the audio |
Ƭ ConversationalArgs: Args & { inputs: { generated_responses?: string[] ; past_user_inputs?: string[] ; text: string } ; parameters?: { max_length?: number ; max_time?: number ; min_length?: number ; repetition_penalty?: number ; temperature?: number ; top_k?: number ; top_p?: number } }
Ƭ ConversationalReturn: Object

Name | Type |
---|---|
conversation | { generated_responses: string[] ; past_user_inputs: string[] } |
conversation.generated_responses | string[] |
conversation.past_user_inputs | string[] |
generated_text | string |
warnings | string[] |
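A minimal sketch of a conversational exchange using these shapes; the model id and turn texts are placeholders:

```ts
// Conforms to ConversationalArgs: prior turns go in inputs, sampling knobs in parameters.
const args = {
  model: "microsoft/DialoGPT-large", // placeholder model id
  inputs: {
    past_user_inputs: ["Which movie is the best ?"],
    generated_responses: ["It is Die Hard for sure."],
    text: "Can you explain why ?",
  },
  parameters: { max_length: 100, temperature: 0.7 },
};

// Illustrative ConversationalReturn value: the updated conversation state plus the new reply.
const response = {
  conversation: {
    past_user_inputs: ["Which movie is the best ?", "Can you explain why ?"],
    generated_responses: ["It is Die Hard for sure.", "It's the greatest movie ever made."],
  },
  generated_text: "It's the greatest movie ever made.",
  warnings: [] as string[],
};
console.log(response.generated_text);
```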
Ƭ FeatureExtractionArgs: Args & { inputs: Record<string, any> | Record<string, any>[] }

Ƭ FeatureExtractionReturn: (number | number[])[]
Returned values are a list of floats, or a list of lists of floats (depending on whether you sent a string or a list of strings, and on whether an automatic reduction, usually mean_pooling, was applied for you). This should be explained in the model's README.
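A sketch of both return shapes; the model id and the input record's keys are illustrative, since the accepted keys are model specific:

```ts
// Conforms to FeatureExtractionArgs; the keys accepted under inputs depend on the model.
const args = {
  model: "sentence-transformers/all-MiniLM-L6-v2", // placeholder model id
  inputs: { source_sentence: "That is a happy person", sentences: ["That is a happy dog"] },
};

// A FeatureExtractionReturn is (number | number[])[]: flat when a reduction such as
// mean_pooling was applied, nested when per-token vectors are returned.
const pooled: (number | number[])[] = [0.12, 0.53, -0.08];
const perToken: (number | number[])[] = [
  [0.12, 0.53],
  [-0.08, 0.4],
];
console.log(Array.isArray(pooled[0]) ? "per-token vectors" : "pooled vector");
console.log(Array.isArray(perToken[0]) ? "per-token vectors" : "pooled vector");
```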
Ƭ FillMaskArgs: Args & { inputs: string }

Ƭ FillMaskReturn: { score: number ; sequence: string ; token: number ; token_str: string }[]
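A sketch of the fill-mask shapes; the model id, mask token, and candidate values are illustrative (the mask token itself depends on the model):

```ts
// Conforms to FillMaskArgs.
const args = {
  model: "bert-base-uncased", // placeholder model id
  inputs: "The goal of life is [MASK].",
};

// Illustrative FillMaskReturn value: one candidate per entry, typically sorted by score.
const candidates = [
  { score: 0.11, sequence: "the goal of life is happiness.", token: 8774, token_str: "happiness" },
  { score: 0.03, sequence: "the goal of life is survival.", token: 7691, token_str: "survival" },
];
for (const c of candidates) {
  console.log(`${c.token_str}\t${c.score.toFixed(3)}`);
}
```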
Ƭ ImageClassificationArgs: Args & { data: any }

Ƭ ImageClassificationReturn: ImageClassificationReturnValue[]
Ƭ ImageClassificationReturnValue: Object

Name | Type | Description |
---|---|---|
label | string | The label for the class (model specific) |
score | number | A float that represents how likely it is that the image file belongs to this class. |
Ƭ ImageSegmentationArgs: Args & { data: any }

Ƭ ImageSegmentationReturn: ImageSegmentationReturnValue[]
Ƭ ImageSegmentationReturnValue: Object

Name | Type | Description |
---|---|---|
label | string | The label for the class (model specific) of a segment. |
mask | string | A string (base64-encoded single-channel black-and-white image) representing the mask of a segment. |
score | number | A float that represents how likely it is that the detected object belongs to the given class. |
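A sketch of handling a segment's mask; the model id, file names, and the truncated base64 string are placeholders:

```ts
import { readFileSync, writeFileSync } from "node:fs";

// Conforms to ImageSegmentationArgs: the base Args plus raw image data.
const args = {
  model: "facebook/detr-resnet-50-panoptic", // placeholder model id
  data: readFileSync("cats.jpg"),            // raw image bytes; typed as `any`
};

// Each ImageSegmentationReturnValue carries a base64 mask; decoding it yields a
// single-channel black-and-white image covering the segment.
const segment = { label: "cat", score: 0.97, mask: "iVBORw0KGgo..." }; // illustrative values
writeFileSync(`${segment.label}-mask.png`, Buffer.from(segment.mask, "base64"));
```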
Ƭ ObjectDetectionArgs: Args & { data: any }

Ƭ ObjectDetectionReturn: ObjectDetectionReturnValue[]
Ƭ ObjectDetectionReturnValue: Object

Name | Type | Description |
---|---|---|
box | { xmax: number ; xmin: number ; ymax: number ; ymin: number } | A dict (with keys [xmin, ymin, xmax, ymax]) representing the bounding box of a detected object. |
box.xmax | number | - |
box.xmin | number | - |
box.ymax | number | - |
box.ymin | number | - |
label | string | The label for the class (model specific) of a detected object. |
score | number | A float that represents how likely it is that the detected object belongs to the given class. |
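A sketch of consuming an ObjectDetectionReturn value; the labels, scores, and coordinates are illustrative:

```ts
// Illustrative ObjectDetectionReturn value: one entry per detected object.
const detections = [
  { label: "cat", score: 0.98, box: { xmin: 12, ymin: 30, xmax: 240, ymax: 210 } },
  { label: "remote", score: 0.91, box: { xmin: 250, ymin: 95, xmax: 330, ymax: 140 } },
];

// Keep confident detections and compute the pixel area of each bounding box.
for (const d of detections.filter((d) => d.score > 0.9)) {
  const area = (d.box.xmax - d.box.xmin) * (d.box.ymax - d.box.ymin);
  console.log(`${d.label}: ${area} px`);
}
```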
Ƭ Options: Object

Name | Type | Description |
---|---|---|
retry_on_error? | boolean | (Default: true) Boolean. If a request returns a 503 and wait_for_model is set to false, the request will be retried with the same parameters but with wait_for_model set to true. |
use_cache? | boolean | (Default: true) Boolean. There is a cache layer on the Inference API to speed up requests that have already been seen. Most models can use those results as-is, since models are deterministic (the results would be the same anyway). However, if you use a non-deterministic model, you can set this parameter to prevent the caching mechanism from being used, forcing a genuinely new query. |
use_gpu? | boolean | (Default: false) Boolean to use GPU instead of CPU for inference (requires at least the Startup plan). |
wait_for_model? | boolean | (Default: false) Boolean. If the model is not ready, wait for it instead of receiving a 503. This limits the number of requests required to get your inference done. It is advised to only set this flag to true after receiving a 503 error, as it will limit hanging in your application to known places. |
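Options travels alongside the task arguments; a minimal sketch of a typical combination (how the client accepts it is not shown here, only the shape of the value):

```ts
// An Options value for a non-deterministic model that may still be loading:
// skip the cache and wait for the model instead of retrying on 503.
const options = {
  use_cache: false,
  wait_for_model: true,
  // retry_on_error and use_gpu keep their defaults (true and false respectively).
};
console.log(options);
```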
Ƭ QuestionAnswerArgs: Args & { inputs: { context: string ; question: string } }
Ƭ QuestionAnswerReturn: Object

Name | Type | Description |
---|---|---|
answer | string | A string that is the answer within the text. |
end | number | The index (string-wise) of the end of the answer within the context. |
score | number | A float that represents how likely it is that the answer is correct. |
start | number | The index (string-wise) of the start of the answer within the context. |
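A sketch of the question-answering shapes; the model id and the returned values are illustrative:

```ts
// Conforms to QuestionAnswerArgs: the question and its supporting context.
const args = {
  model: "deepset/roberta-base-squad2", // placeholder model id
  inputs: {
    question: "What is the capital of France?",
    context: "The capital of France is Paris, which is also its largest city.",
  },
};

// Illustrative QuestionAnswerReturn value: start/end are character indices into the context.
const answer = { answer: "Paris", score: 0.98, start: 25, end: 30 };
console.log(args.inputs.context.slice(answer.start, answer.end)); // "Paris"
```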
Ƭ SummarizationArgs: Args & { inputs: string ; parameters?: { max_length?: number ; max_time?: number ; min_length?: number ; repetition_penalty?: number ; temperature?: number ; top_k?: number ; top_p?: number } }
Ƭ SummarizationReturn: Object

Name | Type | Description |
---|---|---|
summary_text | string | The summarized text |
Ƭ TableQuestionAnswerArgs: Args & { inputs: { query: string ; table: Record<string, string[]> } }
Ƭ TableQuestionAnswerReturn: Object

Name | Type | Description |
---|---|---|
aggregator | string | The aggregator used to get the answer |
answer | string | The plaintext answer |
cells | string [] | A list of the contents of the cells referenced in the answer |
coordinates | number [][] | A list of coordinates of the cells referenced in the answer |
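A sketch of the table question-answering shapes; the model id, table contents, and returned values are illustrative:

```ts
// Conforms to TableQuestionAnswerArgs: the table maps column names to equal-length
// arrays of cell values (all strings).
const args = {
  model: "google/tapas-base-finetuned-wtq", // placeholder model id
  inputs: {
    query: "How many stars does the Transformers repository have?",
    table: {
      Repository: ["Transformers", "Datasets", "Tokenizers"],
      Stars: ["36542", "4512", "3934"],
    },
  },
};

// Illustrative TableQuestionAnswerReturn value: coordinates point back into the table
// as [row, column] pairs, and cells holds the matching contents.
const result = {
  aggregator: "NONE",
  answer: "36542",
  cells: ["36542"],
  coordinates: [[0, 1]],
};
console.log(result.answer);
```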
Ƭ TextClassificationArgs: Args & { inputs: string }

Ƭ TextClassificationReturn: { label: string ; score: number }[]
Ƭ TextGenerationArgs: Args & { inputs: string ; parameters?: { do_sample?: boolean ; max_new_tokens?: number ; max_time?: number ; num_return_sequences?: number ; repetition_penalty?: number ; return_full_text?: boolean ; temperature?: number ; top_k?: number ; top_p?: number } }
Ƭ TextGenerationReturn: Object

Name | Type | Description |
---|---|---|
generated_text | string | The generated text continuing the input |
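A sketch of the text-generation shapes; the model id, prompt, and continuation are illustrative:

```ts
// Conforms to TextGenerationArgs: sampling and length controls live under parameters.
const args = {
  model: "gpt2", // placeholder model id
  inputs: "The answer to the universe is",
  parameters: {
    max_new_tokens: 50,
    temperature: 0.8,
    top_p: 0.95,
    return_full_text: false, // only return the continuation, not the prompt
  },
};

// Illustrative TextGenerationReturn value.
const output = { generated_text: " a question that has puzzled philosophers for centuries." };
console.log(args.inputs + output.generated_text);
```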
Ƭ TextToImageArgs: Args & { inputs: string ; negative_prompt?: string }

Ƭ TextToImageReturn: ArrayBuffer
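A sketch of the text-to-image shapes; the model id and prompts are placeholders, and an empty buffer stands in for real image bytes so the snippet stays self-contained:

```ts
import { writeFileSync } from "node:fs";

// Conforms to TextToImageArgs; the negative prompt is optional.
const args = {
  model: "stabilityai/stable-diffusion-2", // placeholder model id
  inputs: "An astronaut riding a horse on the moon, photorealistic",
  negative_prompt: "blurry, low quality",
};

// A TextToImageReturn is the raw image bytes as an ArrayBuffer.
const imageBytes: ArrayBuffer = new ArrayBuffer(0); // placeholder for a real response
writeFileSync("astronaut.png", Buffer.from(imageBytes));
```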
Ƭ TokenClassificationArgs: Args & { inputs: string ; parameters?: { aggregation_strategy?: "none" | "simple" | "first" | "average" | "max" } }

Ƭ TokenClassificationReturn: TokenClassificationReturnValue[]
Ƭ TokenClassificationReturnValue: Object

Name | Type | Description |
---|---|---|
end | number | The character offset at which the recognized entity ends. Useful to disambiguate if the word occurs multiple times. |
entity_group | string | The type of the entity being recognized (model specific). |
score | number | How likely it is that the entity was correctly recognized. |
start | number | The character offset at which the recognized entity starts. Useful to disambiguate if the word occurs multiple times. |
word | string | The string that was captured |
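A sketch of the token-classification shapes; the model id, sentence, and entity values are illustrative:

```ts
// Conforms to TokenClassificationArgs; aggregation_strategy controls how sub-word
// tokens are merged back into entities.
const args = {
  model: "dbmdz/bert-large-cased-finetuned-conll03-english", // placeholder model id
  inputs: "My name is Sarah Jessica Parker but you can call me Jessica",
  parameters: { aggregation_strategy: "simple" },
};

// Illustrative TokenClassificationReturn value: start/end are character offsets into the input.
const entities = [
  { entity_group: "PER", word: "Sarah Jessica Parker", score: 0.99, start: 11, end: 31 },
  { entity_group: "PER", word: "Jessica", score: 0.98, start: 52, end: 59 },
];
for (const e of entities) {
  console.log(`${e.entity_group}: "${args.inputs.slice(e.start, e.end)}"`);
}
```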
Ƭ TranslationArgs: Args & { inputs: string }

Ƭ TranslationReturn: Object

Name | Type | Description |
---|---|---|
translation_text | string | The string after translation |
Ƭ ZeroShotClassificationArgs: Args & { inputs: string | string[] ; parameters: { candidate_labels: string[] ; multi_label?: boolean } }

Ƭ ZeroShotClassificationReturn: ZeroShotClassificationReturnValue[]
Ƭ ZeroShotClassificationReturnValue: Object

Name | Type |
---|---|
labels | string [] |
scores | number [] |
sequence | string |
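A sketch of the zero-shot classification shapes; the model id, labels, and scores are illustrative, and labels/scores are parallel arrays:

```ts
// Conforms to ZeroShotClassificationArgs: candidate_labels supplies the classes to score.
const args = {
  model: "facebook/bart-large-mnli", // placeholder model id
  inputs: "I have a problem with my iphone that needs to be resolved asap!!",
  parameters: {
    candidate_labels: ["urgent", "not urgent", "phone", "tablet", "computer"],
    multi_label: true,
  },
};

// Illustrative ZeroShotClassificationReturnValue.
const result = {
  sequence: args.inputs,
  labels: ["urgent", "phone", "computer", "not urgent", "tablet"],
  scores: [0.99, 0.96, 0.03, 0.02, 0.01],
};

// Zip the parallel arrays and keep the labels that pass a threshold.
const ranked = result.labels.map((label, i) => ({ label, score: result.scores[i] }));
console.log(ranked.filter((r) => r.score > 0.5));
```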