oci.AiLanguage.getModels

Oracle Cloud Infrastructure v1.41.0 published on Wednesday, Jun 19, 2024 by Pulumi

    This data source provides the list of Models in Oracle Cloud Infrastructure Ai Language service.

    Returns a list of models.

    Example Usage

    Coming soon! Examples for the other supported languages are not yet available; Java and YAML examples follow below.
    
    package generated_program;

    import com.pulumi.Context;
    import com.pulumi.Pulumi;
    import com.pulumi.oci.AiLanguage.AiLanguageFunctions;
    import com.pulumi.oci.AiLanguage.inputs.GetModelsArgs;

    public class App {
        public static void main(String[] args) {
            Pulumi.run(App::stack);
        }

        public static void stack(Context ctx) {
            // compartmentId, modelDisplayName and modelState refer to values defined
            // elsewhere (for example, stack configuration); testModel and testProject
            // refer to existing resources in the same program.
            final var testModels = AiLanguageFunctions.getModels(GetModelsArgs.builder()
                .compartmentId(compartmentId)
                .displayName(modelDisplayName)
                .modelId(testModel.id())
                .projectId(testProject.id())
                .state(modelState)
                .build());
        }
    }
    
    variables:
      testModels:
        fn::invoke:
          Function: oci:AiLanguage:getModels
          Arguments:
            compartmentId: ${compartmentId}
            displayName: ${modelDisplayName}
            modelId: ${testModel.id}
            projectId: ${testProject.id}
            state: ${modelState}
    

    Using getModels

    Two invocation forms are available. The direct form accepts plain arguments and either blocks until the result value is available, or returns a Promise-wrapped result. The output form accepts Input-wrapped arguments and returns an Output-wrapped result.

    function getModels(args: GetModelsArgs, opts?: InvokeOptions): Promise<GetModelsResult>
    function getModelsOutput(args: GetModelsOutputArgs, opts?: InvokeOptions): Output<GetModelsResult>
    def get_models(compartment_id: Optional[str] = None,
                   display_name: Optional[str] = None,
                   filters: Optional[Sequence[_ailanguage.GetModelsFilter]] = None,
                   id: Optional[str] = None,
                   project_id: Optional[str] = None,
                   state: Optional[str] = None,
                   opts: Optional[InvokeOptions] = None) -> GetModelsResult
    def get_models_output(compartment_id: Optional[pulumi.Input[str]] = None,
                   display_name: Optional[pulumi.Input[str]] = None,
                   filters: Optional[pulumi.Input[Sequence[pulumi.Input[_ailanguage.GetModelsFilterArgs]]]] = None,
                   id: Optional[pulumi.Input[str]] = None,
                   project_id: Optional[pulumi.Input[str]] = None,
                   state: Optional[pulumi.Input[str]] = None,
                   opts: Optional[InvokeOptions] = None) -> Output[GetModelsResult]
    func GetModels(ctx *Context, args *GetModelsArgs, opts ...InvokeOption) (*GetModelsResult, error)
    func GetModelsOutput(ctx *Context, args *GetModelsOutputArgs, opts ...InvokeOption) GetModelsResultOutput

    > Note: This function is named GetModels in the Go SDK.

    public static class GetModels 
    {
        public static Task<GetModelsResult> InvokeAsync(GetModelsArgs args, InvokeOptions? opts = null)
        public static Output<GetModelsResult> Invoke(GetModelsInvokeArgs args, InvokeOptions? opts = null)
    }
    public static CompletableFuture<GetModelsResult> getModels(GetModelsArgs args, InvokeOptions options)
    // Output-based functions aren't available in Java yet
    
    fn::invoke:
      function: oci:AiLanguage/getModels:getModels
      arguments:
        # arguments dictionary
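
    For instance, a minimal Python sketch of the two forms, using the ailanguage module path shown in the Python signatures above (the configuration key compartmentId is illustrative, not part of the function itself):

    import pulumi
    import pulumi_oci as oci

    config = pulumi.Config()
    # Illustrative config key; supply a compartment OCID of your own.
    compartment_id = config.require("compartmentId")

    # Direct form: plain arguments, plain result.
    models = oci.ailanguage.get_models(compartment_id=compartment_id)

    # Output form: accepts Input-wrapped arguments (for example, values flowing
    # from other resources) and returns an Output-wrapped result.
    models_output = oci.ailanguage.get_models_output(compartment_id=compartment_id)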

    The following arguments are supported:

    CompartmentId string
    The ID of the compartment in which to list resources.
    DisplayName string
    A filter to return only resources that match the entire display name given.
    Filters List<GetModelsFilter>
    Id string
    Unique identifier (OCID) of the model; immutable on creation.
    ProjectId string
    The ID of the project for which to list the objects.
    State string
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    CompartmentId string
    The ID of the compartment in which to list resources.
    DisplayName string
    A filter to return only resources that match the entire display name given.
    Filters []GetModelsFilter
    Id string
    Unique identifier (OCID) of the model; immutable on creation.
    ProjectId string
    The ID of the project for which to list the objects.
    State string
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    compartmentId String
    The ID of the compartment in which to list resources.
    displayName String
    A filter to return only resources that match the entire display name given.
    filters List<GetModelsFilter>
    id String
    Unique identifier (OCID) of the model; immutable on creation.
    projectId String
    The ID of the project for which to list the objects.
    state String
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    compartmentId string
    The ID of the compartment in which to list resources.
    displayName string
    A filter to return only resources that match the entire display name given.
    filters GetModelsFilter[]
    id string
    Unique identifier (OCID) of the model; immutable on creation.
    projectId string
    The ID of the project for which to list the objects.
    state string
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    compartment_id str
    The ID of the compartment in which to list resources.
    display_name str
    A filter to return only resources that match the entire display name given.
    filters Sequence[ailanguage.GetModelsFilter]
    id str
    Unique identifier (OCID) of the model; immutable on creation.
    project_id str
    The ID of the project for which to list the objects.
    state str
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    compartmentId String
    The ID of the compartment in which to list resources.
    displayName String
    A filter to return only resources that match the entire display name given.
    filters List<Property Map>
    id String
    Unique identifier (OCID) of the model; immutable on creation.
    projectId String
    The ID of the project for which to list the objects.
    state String
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.

    getModels Result

    The following output properties are available:

    CompartmentId string
    The OCID for the model's compartment.
    ModelCollections List<GetModelsModelCollection>
    The list of model_collection.
    DisplayName string
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    Filters List<GetModelsFilter>
    Id string
    Unique identifier (OCID) of the model; immutable on creation.
    ProjectId string
    The OCID of the project to associate with the model.
    State string
    The state of the model.
    CompartmentId string
    The OCID for the model's compartment.
    ModelCollections []GetModelsModelCollection
    The list of model_collection.
    DisplayName string
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    Filters []GetModelsFilter
    Id string
    Unique identifier (OCID) of the model; immutable on creation.
    ProjectId string
    The OCID of the project to associate with the model.
    State string
    The state of the model.
    compartmentId String
    The OCID for the model's compartment.
    modelCollections List<GetModelsModelCollection>
    The list of model_collection.
    displayName String
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    filters List<GetModelsFilter>
    id String
    Unique identifier (OCID) of the model; immutable on creation.
    projectId String
    The OCID of the project to associate with the model.
    state String
    The state of the model.
    compartmentId string
    The OCID for the model's compartment.
    modelCollections GetModelsModelCollection[]
    The list of model_collection.
    displayName string
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    filters GetModelsFilter[]
    id string
    Unique identifier (OCID) of the model; immutable on creation.
    projectId string
    The OCID of the project to associate with the model.
    state string
    The state of the model.
    compartment_id str
    The OCID for the model's compartment.
    model_collections Sequence[ailanguage.GetModelsModelCollection]
    The list of model_collection.
    display_name str
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    filters Sequence[ailanguage.GetModelsFilter]
    id str
    Unique identifier (OCID) of the model; immutable on creation.
    project_id str
    The OCID of the project to associate with the model.
    state str
    The state of the model.
    compartmentId String
    The OCID for the model's compartment.
    modelCollections List<Property Map>
    The list of model_collection.
    displayName String
    A user-friendly display name for the resource. It does not have to be unique and can be modified. Avoid entering confidential information.
    filters List<Property Map>
    id String
    Unique identifier (OCID) of the model; immutable on creation.
    projectId String
    The OCID of the project to associate with the model.
    state String
    The state of the model.
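
    As a hedged illustration of consuming the result in Python, the sketch below assumes each entry in model_collections exposes its models under an items field (see GetModelsModelCollectionItem under Supporting Types) and uses a placeholder compartment OCID:

    import pulumi
    import pulumi_oci as oci

    models = oci.ailanguage.get_models(compartment_id="ocid1.compartment.oc1..example")

    # Collect the display name of every returned model; the "items" accessor is assumed here.
    model_names = [
        item.display_name
        for collection in models.model_collections
        for item in collection.items
    ]
    pulumi.export("aiLanguageModelNames", model_names)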

    Supporting Types

    GetModelsFilter

    Name string
    Values List<string>
    Regex bool
    Name string
    Values []string
    Regex bool
    name String
    values List<String>
    regex Boolean
    name string
    values string[]
    regex boolean
    name str
    values Sequence[str]
    regex bool
    name String
    values List<String>
    regex Boolean
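
    The fields above follow the common data-source filter shape: a name to match on, one or more values, and an optional regex flag. A hedged Python sketch of the output form with one filter (the filter name displayName and the values are illustrative):

    import pulumi_oci as oci

    models = oci.ailanguage.get_models_output(
        compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
        filters=[oci.ailanguage.GetModelsFilterArgs(
            name="displayName",   # illustrative filter field
            values=["my-model"],
            regex=False,
        )],
    )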

    GetModelsModelCollection

    GetModelsModelCollectionItem

    CompartmentId string
    The ID of the compartment in which to list resources.
    DefinedTags Dictionary<string, object>
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    Description string
    A short description of the Model.
    DisplayName string
    A filter to return only resources that match the entire display name given.
    EvaluationResults List<GetModelsModelCollectionItemEvaluationResult>
    model training results of different models
    FreeformTags Dictionary<string, object>
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    Id string
    Unique identifier (OCID) of the model; immutable on creation.
    LifecycleDetails string
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    ModelDetails List<GetModelsModelCollectionItemModelDetail>
    Possible model types
    ProjectId string
    The ID of the project for which to list the objects.
    State string
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    SystemTags Dictionary<string, object>
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    TestStrategies List<GetModelsModelCollectionItemTestStrategy>
    Possible strategy for the testing and optional validation datasets.
    TimeCreated string
    The time the model was created. An RFC3339 formatted datetime string.
    TimeUpdated string
    The time the model was updated. An RFC3339 formatted datetime string.
    TrainingDatasets List<GetModelsModelCollectionItemTrainingDataset>
    Possible data set type
    Version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    CompartmentId string
    The ID of the compartment in which to list resources.
    DefinedTags map[string]interface{}
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    Description string
    A short description of the Model.
    DisplayName string
    A filter to return only resources that match the entire display name given.
    EvaluationResults []GetModelsModelCollectionItemEvaluationResult
    model training results of different models
    FreeformTags map[string]interface{}
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    Id string
    Unique identifier (OCID) of the model; immutable on creation.
    LifecycleDetails string
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    ModelDetails []GetModelsModelCollectionItemModelDetail
    Possible model types
    ProjectId string
    The ID of the project for which to list the objects.
    State string
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    SystemTags map[string]interface{}
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    TestStrategies []GetModelsModelCollectionItemTestStrategy
    Possible strategy for the testing and optional validation datasets.
    TimeCreated string
    The time the model was created. An RFC3339 formatted datetime string.
    TimeUpdated string
    The time the model was updated. An RFC3339 formatted datetime string.
    TrainingDatasets []GetModelsModelCollectionItemTrainingDataset
    Possible data set type
    Version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    compartmentId String
    The ID of the compartment in which to list resources.
    definedTags Map<String,Object>
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description String
    A short description of the Model.
    displayName String
    A filter to return only resources that match the entire display name given.
    evaluationResults List<GetModelsModelCollectionItemEvaluationResult>
    model training results of different models
    freeformTags Map<String,Object>
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id String
    Unique identifier (OCID) of the model; immutable on creation.
    lifecycleDetails String
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    modelDetails List<GetModelsModelCollectionItemModelDetail>
    Possible model types
    projectId String
    The ID of the project for which to list the objects.
    state String
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    systemTags Map<String,Object>
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    testStrategies List<GetModelsModelCollectionItemTestStrategy>
    Possible strategy for the testing and optional validation datasets.
    timeCreated String
    The time the model was created. An RFC3339 formatted datetime string.
    timeUpdated String
    The time the model was updated. An RFC3339 formatted datetime string.
    trainingDatasets List<GetModelsModelCollectionItemTrainingDataset>
    Possible data set type
    version String
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    compartmentId string
    The ID of the compartment in which to list resources.
    definedTags {[key: string]: any}
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description string
    A short description of the Model.
    displayName string
    A filter to return only resources that match the entire display name given.
    evaluationResults GetModelsModelCollectionItemEvaluationResult[]
    model training results of different models
    freeformTags {[key: string]: any}
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id string
    Unique identifier (OCID) of the model; immutable on creation.
    lifecycleDetails string
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    modelDetails GetModelsModelCollectionItemModelDetail[]
    Possible model types
    projectId string
    The ID of the project for which to list the objects.
    state string
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    systemTags {[key: string]: any}
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    testStrategies GetModelsModelCollectionItemTestStrategy[]
    Possible strategy for the testing and optional validation datasets.
    timeCreated string
    The time the model was created. An RFC3339 formatted datetime string.
    timeUpdated string
    The time the model was updated. An RFC3339 formatted datetime string.
    trainingDatasets GetModelsModelCollectionItemTrainingDataset[]
    Possible data set type
    version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    compartment_id str
    The ID of the compartment in which to list resources.
    defined_tags Mapping[str, Any]
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description str
    A short description of the Model.
    display_name str
    A filter to return only resources that match the entire display name given.
    evaluation_results Sequence[ailanguage.GetModelsModelCollectionItemEvaluationResult]
    model training results of different models
    freeform_tags Mapping[str, Any]
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id str
    Unique identifier (OCID) of the model; immutable on creation.
    lifecycle_details str
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    model_details Sequence[ailanguage.GetModelsModelCollectionItemModelDetail]
    Possible model types
    project_id str
    The ID of the project for which to list the objects.
    state str
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    system_tags Mapping[str, Any]
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    test_strategies Sequence[ailanguage.GetModelsModelCollectionItemTestStrategy]
    Possible strategy for the testing and optional validation datasets.
    time_created str
    The time the model was created. An RFC3339 formatted datetime string.
    time_updated str
    The time the model was updated. An RFC3339 formatted datetime string.
    training_datasets Sequence[ailanguage.GetModelsModelCollectionItemTrainingDataset]
    Possible data set type
    version str
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    compartmentId String
    The ID of the compartment in which to list resources.
    definedTags Map<Any>
    Defined tags for this resource. Each key is predefined and scoped to a namespace. Example: {"foo-namespace.bar-key": "value"}
    description String
    A short description of the Model.
    displayName String
    A filter to return only resources that match the entire display name given.
    evaluationResults List<Property Map>
    model training results of different models
    freeformTags Map<Any>
    Simple key-value pair that is applied without any predefined name, type or scope. Exists for cross-compatibility only. Example: {"bar-key": "value"}
    id String
    Unique identifier (OCID) of the model; immutable on creation.
    lifecycleDetails String
    A message describing the current state in more detail. For example, can be used to provide actionable information for a resource in failed state.
    modelDetails List<Property Map>
    Possible model types
    projectId String
    The ID of the project for which to list the objects.
    state String
    Filter results by the specified lifecycle state. Must be a valid state for the resource type.
    systemTags Map<Any>
    Usage of system tag keys. These predefined keys are scoped to namespaces. Example: {"orcl-cloud.free-tier-retained": "true"}
    testStrategies List<Property Map>
    Possible strategy for the testing and optional validation datasets.
    timeCreated String
    The time the model was created. An RFC3339 formatted datetime string.
    timeUpdated String
    The time the model was updated. An RFC3339 formatted datetime string.
    trainingDatasets List<Property Map>
    Possible data set type
    version String
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.

    GetModelsModelCollectionItemEvaluationResult

    ClassMetrics List<GetModelsModelCollectionItemEvaluationResultClassMetric>
    List of text classification metrics
    ConfusionMatrix string
    class level confusion matrix
    EntityMetrics List<GetModelsModelCollectionItemEvaluationResultEntityMetric>
    List of entity metrics
    Labels List<string>
    labels
    Metrics List<GetModelsModelCollectionItemEvaluationResultMetric>
    Model level named entity recognition metrics
    ModelType string
    Model type
    ClassMetrics []GetModelsModelCollectionItemEvaluationResultClassMetric
    List of text classification metrics
    ConfusionMatrix string
    class level confusion matrix
    EntityMetrics []GetModelsModelCollectionItemEvaluationResultEntityMetric
    List of entity metrics
    Labels []string
    labels
    Metrics []GetModelsModelCollectionItemEvaluationResultMetric
    Model level named entity recognition metrics
    ModelType string
    Model type
    classMetrics List<GetModelsModelCollectionItemEvaluationResultClassMetric>
    List of text classification metrics
    confusionMatrix String
    class level confusion matrix
    entityMetrics List<GetModelsModelCollectionItemEvaluationResultEntityMetric>
    List of entity metrics
    labels List<String>
    labels
    metrics List<GetModelsModelCollectionItemEvaluationResultMetric>
    Model level named entity recognition metrics
    modelType String
    Model type
    classMetrics GetModelsModelCollectionItemEvaluationResultClassMetric[]
    List of text classification metrics
    confusionMatrix string
    class level confusion matrix
    entityMetrics GetModelsModelCollectionItemEvaluationResultEntityMetric[]
    List of entity metrics
    labels string[]
    labels
    metrics GetModelsModelCollectionItemEvaluationResultMetric[]
    Model level named entity recognition metrics
    modelType string
    Model type
    classMetrics List<Property Map>
    List of text classification metrics
    confusionMatrix String
    class level confusion matrix
    entityMetrics List<Property Map>
    List of entity metrics
    labels List<String>
    labels
    metrics List<Property Map>
    Model level named entity recognition metrics
    modelType String
    Model type

    GetModelsModelCollectionItemEvaluationResultClassMetric

    F1 double
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    Support double
    number of samples in the test set
    F1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall float64
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    Support float64
    number of samples in the test set
    f1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    support Double
    number of samples in the test set
    f1 number
    F1-score is a measure of a model’s accuracy on a dataset
    label string
    Entity label
    precision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    support number
    number of samples in the test set
    f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    label str
    Entity label
    precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall float
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    support float
    number of samples in the test set
    f1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    support Number
    number of samples in the test set

    GetModelsModelCollectionItemEvaluationResultEntityMetric

    F1 double
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    F1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    Label string
    Entity label
    Precision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    Recall float64
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    f1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    f1 number
    F1-score is a measure of a model’s accuracy on a dataset
    label string
    Entity label
    precision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    label str
    Entity label
    precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall float
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    f1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    label String
    Entity label
    precision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    recall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.

    GetModelsModelCollectionItemEvaluationResultMetric

    Accuracy double
    The fraction of the labels that were correctly recognised.
    MacroF1 double
    F1-score is a measure of a model’s accuracy on a dataset
    MacroPrecision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MacroRecall double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    MicroF1 double
    F1-score is a measure of a model’s accuracy on a dataset
    MicroPrecision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MicroRecall double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    WeightedF1 double
    F1-score is a measure of a model’s accuracy on a dataset
    WeightedPrecision double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    WeightedRecall double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    Accuracy float64
    The fraction of the labels that were correctly recognised.
    MacroF1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    MacroPrecision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MacroRecall float64
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    MicroF1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    MicroPrecision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    MicroRecall float64
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    WeightedF1 float64
    F1-score is a measure of a model’s accuracy on a dataset
    WeightedPrecision float64
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    WeightedRecall float64
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    accuracy Double
    The fraction of the labels that were correctly recognised.
    macroF1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    macroPrecision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macroRecall Double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    microF1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    microPrecision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    microRecall Double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    weightedF1 Double
    F1-score is a measure of a model’s accuracy on a dataset
    weightedPrecision Double
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weightedRecall Double
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    accuracy number
    The fraction of the labels that were correctly recognised.
    macroF1 number
    F1-score is a measure of a model’s accuracy on a dataset
    macroPrecision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macroRecall number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    microF1 number
    F1-score is a measure of a model’s accuracy on a dataset
    microPrecision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    microRecall number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    weightedF1 number
    F1-score is a measure of a model’s accuracy on a dataset
    weightedPrecision number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weightedRecall number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    accuracy float
    The fraction of the labels that were correctly recognised.
    macro_f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    macro_precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macro_recall float
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    micro_f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    micro_precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    micro_recall float
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    weighted_f1 float
    F1-score is a measure of a model’s accuracy on a dataset
    weighted_precision float
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weighted_recall float
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    accuracy Number
    The fraction of the labels that were correctly recognised.
    macroF1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    macroPrecision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    macroRecall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    microF1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    microPrecision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    microRecall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.
    weightedF1 Number
    F1-score is a measure of a model’s accuracy on a dataset
    weightedPrecision Number
    Precision refers to the number of true positives divided by the total number of positive predictions (i.e., the number of true positives plus the number of false positives)
    weightedRecall Number
    Measures the model's ability to predict actual positive classes. It is the ratio between the predicted true positives and what was actually tagged. The recall metric reveals how many of the predicted classes are correct.

    GetModelsModelCollectionItemModelDetail

    ClassificationModes List<GetModelsModelCollectionItemModelDetailClassificationMode>
    Classification modes
    LanguageCode string
    Supported language; the default value is en.
    ModelType string
    Model type
    Version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    ClassificationModes []GetModelsModelCollectionItemModelDetailClassificationMode
    Classification modes
    LanguageCode string
    Supported language; the default value is en.
    ModelType string
    Model type
    Version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classificationModes List<GetModelsModelCollectionItemModelDetailClassificationMode>
    Classification modes
    languageCode String
    Supported language; the default value is en.
    modelType String
    Model type
    version String
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classificationModes GetModelsModelCollectionItemModelDetailClassificationMode[]
    Classification modes
    languageCode string
    Supported language; the default value is en.
    modelType string
    Model type
    version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classification_modes Sequence[ailanguage.GetModelsModelCollectionItemModelDetailClassificationMode]
    Classification modes
    language_code str
    Supported language; the default value is en.
    model_type str
    Model type
    version str
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classificationModes List<Property Map>
    Classification modes
    languageCode String
    Supported language; the default value is en.
    modelType String
    Model type
    version String
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.

    GetModelsModelCollectionItemModelDetailClassificationMode

    ClassificationMode string
    Classification mode
    Version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    ClassificationMode string
    Classification mode
    Version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classificationMode String
    Classification mode
    version String
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classificationMode string
    Classification mode
    version string
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classification_mode str
    Classification mode
    version str
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.
    classificationMode String
    Classification mode
    version String
    For pretrained models, this identifies the model type version used at creation. For custom models, identifying the model by model ID alone is difficult, so this value provides an easier reference for end users. Format: <>::<>_<>::<>, for example ai-lang::NER_V1::CUSTOM-V0.

    GetModelsModelCollectionItemTestStrategy

    StrategyType string
    This information defines the test strategy, allowing different datasets for testing and optional validation.
    TestingDatasets List<GetModelsModelCollectionItemTestStrategyTestingDataset>
    Possible data set type
    ValidationDatasets List<GetModelsModelCollectionItemTestStrategyValidationDataset>
    Possible data set type
    StrategyType string
    This information defines the test strategy, allowing different datasets for testing and optional validation.
    TestingDatasets []GetModelsModelCollectionItemTestStrategyTestingDataset
    Possible data set type
    ValidationDatasets []GetModelsModelCollectionItemTestStrategyValidationDataset
    Possible data set type
    strategyType String
    This information defines the test strategy, allowing different datasets for testing and optional validation.
    testingDatasets List<GetModelsModelCollectionItemTestStrategyTestingDataset>
    Possible data set type
    validationDatasets List<GetModelsModelCollectionItemTestStrategyValidationDataset>
    Possible data set type
    strategyType string
    This information defines the test strategy, allowing different datasets for testing and optional validation.
    testingDatasets GetModelsModelCollectionItemTestStrategyTestingDataset[]
    Possible data set type
    validationDatasets GetModelsModelCollectionItemTestStrategyValidationDataset[]
    Possible data set type
    strategy_type str
    This information defines the test strategy, allowing different datasets for testing and optional validation.
    testing_datasets Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyTestingDataset]
    Possible data set type
    validation_datasets Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyValidationDataset]
    Possible data set type
    strategyType String
    This information defines the test strategy, allowing different datasets for testing and optional validation.
    testingDatasets List<Property Map>
    Possible data set type
    validationDatasets List<Property Map>
    Possible data set type

    GetModelsModelCollectionItemTestStrategyTestingDataset

    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible data sets
    LocationDetails List<GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail>
    Possible object storage location types
    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible data sets
    LocationDetails []GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible data sets
    locationDetails List<GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail>
    Possible object storage location types
    datasetId string
    Data Science Labelling Service OCID
    datasetType string
    Possible data sets
    locationDetails GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail[]
    Possible object storage location types
    dataset_id str
    Data Science Labelling Service OCID
    dataset_type str
    Possible data sets
    location_details Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail]
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible data sets
    locationDetails List<Property Map>
    Possible object storage location types

    GetModelsModelCollectionItemTestStrategyTestingDatasetLocationDetail

    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames List<string>
    Array of files which need to be processed in the bucket
    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames []string
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket
    bucket string
    Object storage bucket name
    locationType string
    Possible object storage location types
    namespace string
    Object storage namespace
    objectNames string[]
    Array of files which need to be processed in the bucket
    bucket str
    Object storage bucket name
    location_type str
    Possible object storage location types
    namespace str
    Object storage namespace
    object_names Sequence[str]
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket

    GetModelsModelCollectionItemTestStrategyValidationDataset

    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible data sets
    LocationDetails List<GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail>
    Possible object storage location types
    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible data sets
    LocationDetails []GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible data sets
    locationDetails List<GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail>
    Possible object storage location types
    datasetId string
    Data Science Labelling Service OCID
    datasetType string
    Possible data sets
    locationDetails GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail[]
    Possible object storage location types
    dataset_id str
    Data Science Labelling Service OCID
    dataset_type str
    Possible data sets
    location_details Sequence[ailanguage.GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail]
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible data sets
    locationDetails List<Property Map>
    Possible object storage location types

    GetModelsModelCollectionItemTestStrategyValidationDatasetLocationDetail

    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames List<string>
    Array of files which need to be processed in the bucket
    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames []string
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket
    bucket string
    Object storage bucket name
    locationType string
    Possible object storage location types
    namespace string
    Object storage namespace
    objectNames string[]
    Array of files which need to be processed in the bucket
    bucket str
    Object storage bucket name
    location_type str
    Possible object storage location types
    namespace str
    Object storage namespace
    object_names Sequence[str]
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket

    GetModelsModelCollectionItemTrainingDataset

    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible data sets
    LocationDetails List<GetModelsModelCollectionItemTrainingDatasetLocationDetail>
    Possible object storage location types
    DatasetId string
    Data Science Labelling Service OCID
    DatasetType string
    Possible data sets
    LocationDetails []GetModelsModelCollectionItemTrainingDatasetLocationDetail
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible data sets
    locationDetails List<GetModelsModelCollectionItemTrainingDatasetLocationDetail>
    Possible object storage location types
    datasetId string
    Data Science Labelling Service OCID
    datasetType string
    Possible data sets
    locationDetails GetModelsModelCollectionItemTrainingDatasetLocationDetail[]
    Possible object storage location types
    dataset_id str
    Data Science Labelling Service OCID
    dataset_type str
    Possible data sets
    location_details Sequence[ailanguage.GetModelsModelCollectionItemTrainingDatasetLocationDetail]
    Possible object storage location types
    datasetId String
    Data Science Labelling Service OCID
    datasetType String
    Possible data sets
    locationDetails List<Property Map>
    Possible object storage location types

    GetModelsModelCollectionItemTrainingDatasetLocationDetail

    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames List<string>
    Array of files which need to be processed in the bucket
    Bucket string
    Object storage bucket name
    LocationType string
    Possible object storage location types
    Namespace string
    Object storage namespace
    ObjectNames []string
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket
    bucket string
    Object storage bucket name
    locationType string
    Possible object storage location types
    namespace string
    Object storage namespace
    objectNames string[]
    Array of files which need to be processed in the bucket
    bucket str
    Object storage bucket name
    location_type str
    Possible object storage location types
    namespace str
    Object storage namespace
    object_names Sequence[str]
    Array of files which need to be processed in the bucket
    bucket String
    Object storage bucket name
    locationType String
    Possible object storage location types
    namespace String
    Object storage namespace
    objectNames List<String>
    Array of files which need to be processed in the bucket

    Package Details

    Repository
    oci pulumi/pulumi-oci
    License
    Apache-2.0
    Notes
    This Pulumi package is based on the oci Terraform Provider.