DATADA.AI
dialects (17 records)
Each record lists the following fields: name, display_name, description, version_support, export_instructions, type_mappings, features.
AVRO (Avro)
  description: Apache Avro data serialization format with schema evolution
  version_support: Apache Avro 1.11+
  export_instructions: Apache Avro is a row-oriented data serialization format:
    - Schema-based with JSON schema definitions...
  type_mappings: {"INTEGER": "int", "BIGINT": "long", "SMALLINT": "int", "TINYINT": "int", "DECIMAL": "bytes", "FLOAT...
  features: {"schemas": true, "auto_increment": false, "arrays": true, "json": true, "uuid": true, "enums": true...
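The Avro type_mappings above can be applied mechanically when emitting an Avro record schema. A minimal sketch using only the standard library; the `users` table, its columns, and the mapping subset are illustrative, not part of the record (the full mapping is truncated above):

```python
import json

# Generic column types for a hypothetical "users" table (names are illustrative).
columns = {"id": "BIGINT", "age": "INTEGER"}

# Subset of the AVRO type_mappings shown in the record.
avro_types = {"INTEGER": "int", "BIGINT": "long", "SMALLINT": "int", "TINYINT": "int"}

# Build an Avro record schema by translating each generic column type.
schema = {
    "type": "record",
    "name": "users",
    "fields": [{"name": col, "type": avro_types[typ]} for col, typ in columns.items()],
}
print(json.dumps(schema))
```

Because Avro schemas are plain JSON, the same dictionary can be fed directly to an Avro library's schema parser.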
ORACLE (Oracle Database)
  description: Oracle's enterprise-grade relational database system with advanced features
  version_support: 12c+
  export_instructions: To export your Oracle schema for import into Datada, use SQL Developer, Data Pump, or SQL*Plus:
    # U...
  type_mappings: {"INTEGER": "NUMBER(10)", "BIGINT": "NUMBER(19)", "SMALLINT": "NUMBER(5)", "DECIMAL": "NUMBER({preci...
  features: {"schemas": true, "auto_increment": true, "arrays": false, "json": true, "uuid": false, "enums": fal...
PARQUET (Parquet)
  description: Apache Parquet columnar storage format for efficient analytics
  version_support: Apache Parquet 2.x
  export_instructions: Apache Parquet is a columnar storage format optimized for analytics:
    - Efficient compression and enc...
  type_mappings: {"INTEGER": "INT32", "BIGINT": "INT64", "SMALLINT": "INT32", "TINYINT": "INT32", "DECIMAL": "DECIMAL...
  features: {"schemas": true, "auto_increment": false, "arrays": true, "json": true, "uuid": true, "enums": true...
JSON (JSON)
  description: JSON and JSON Schema format for data interchange and API payloads
  version_support: JSON Schema Draft 7, Draft 2020-12
  export_instructions: JSON format is used for:
    - REST API request/response payloads
    - Configuration files
    - Data interchan...
  type_mappings: {"INTEGER": "integer", "BIGINT": "integer", "SMALLINT": "integer", "TINYINT": "integer", "DECIMAL": ...
  features: {"schemas": false, "auto_increment": false, "arrays": true, "json": true, "uuid": false, "enums": tr...
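The JSON dialect maps every integer-like generic type to the JSON Schema type `"integer"`. A small sketch of rendering a column as a Draft 7 schema; the `id` property and the mapping subset are illustrative (the DECIMAL target is truncated in the record, so it is omitted here):

```python
import json

# Subset of the JSON type_mappings shown in the record.
json_types = {"INTEGER": "integer", "BIGINT": "integer", "SMALLINT": "integer"}

# A hypothetical single-column object rendered as a Draft 7 JSON Schema.
schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {"id": {"type": json_types["BIGINT"]}},
    "required": ["id"],
}
print(json.dumps(schema, indent=2))
```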
POSTGRESQL (PostgreSQL)
  description: Open-source object-relational database system known for reliability, feature robustness, and perform...
  version_support: 9.6+
  export_instructions: To export your PostgreSQL schema for import into Datada, use pg_dump:
    # Export schema only (no data...
  type_mappings: {"INTEGER": "INTEGER", "BIGINT": "BIGINT", "SMALLINT": "SMALLINT", "DECIMAL": "NUMERIC({precision},{...
  features: {"schemas": true, "sequences": true, "auto_increment": true, "arrays": true, "json": true, "jsonb": ...
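The truncated export_instructions point at a schema-only pg_dump run. A sketch of assembling such an invocation; the database and output file names are placeholders, and the exact flags in the record are cut off, so the options below are assumptions based on standard pg_dump switches (`--schema-only` dumps DDL without table data):

```python
import shlex

# Sketch of a schema-only pg_dump invocation (names are placeholders).
cmd = ["pg_dump", "--schema-only", "--no-owner", "--file", "schema.sql", "your_database"]

# Render the command as it would appear on a shell prompt.
print(shlex.join(cmd))
```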
EXCEL (Excel)
  description: Microsoft Excel spreadsheet format (XLSX/XLS)
  version_support: Excel 2007+ (XLSX), Excel 97-2003 (XLS)
  export_instructions: Microsoft Excel is a spreadsheet format for tabular data:
    - Supports multiple sheets within a workbo...
  type_mappings: {"INTEGER": "number", "BIGINT": "number", "SMALLINT": "number", "TINYINT": "number", "DECIMAL": "num...
  features: {"schemas": false, "auto_increment": false, "arrays": false, "json": false, "uuid": false, "enums": ...
CSV (CSV)
  description: Comma-Separated Values format for tabular data interchange
  version_support: RFC 4180
  export_instructions: CSV format is a simple text-based format for tabular data:
    - First row typically contains column hea...
  type_mappings: {"INTEGER": "integer", "BIGINT": "integer", "SMALLINT": "integer", "TINYINT": "integer", "DECIMAL": ...
  features: {"schemas": false, "auto_increment": false, "arrays": false, "json": false, "uuid": false, "enums": ...
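The header-row convention the record describes is what `csv.DictReader` assumes: the first row names the columns, every later row becomes one record. A self-contained sketch with inline sample data (the two rows are illustrative):

```python
import csv
import io

# Inline sample standing in for a real file; the first row is the header,
# per the RFC 4180 convention noted in the record.
data = "id,name\n1,Ada\n2,Grace\n"

# DictReader uses the header row as the keys for every subsequent row.
rows = list(csv.DictReader(io.StringIO(data)))
print(rows)
```

Note that CSV carries no type information ("schemas": false above): every value comes back as a string, so any integer/decimal typing has to be applied after parsing.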
TERADATA (Teradata)
  description: Enterprise data warehouse platform with advanced analytics and parallel processing
  version_support: Teradata 16.x, 17.x
  export_instructions: To export your Teradata schema for import into Datada:
    # Using BTEQ
    .LOGON your_server/your_user,yo...
  type_mappings: {"INTEGER": "INTEGER", "BIGINT": "BIGINT", "SMALLINT": "SMALLINT", "TINYINT": "BYTEINT", "DECIMAL": ...
  features: {"schemas": true, "databases": true, "auto_increment": true, "identity_columns": true, "arrays": fal...
SNOWFLAKE (Snowflake)
  description: Cloud-native data warehouse with elastic scaling and zero-maintenance
  version_support: All versions
  export_instructions: To export your Snowflake schema for import into Datada, use SnowSQL or the web interface:
    # Using S...
  type_mappings: {"INTEGER": "NUMBER(38,0)", "BIGINT": "NUMBER(38,0)", "SMALLINT": "NUMBER(38,0)", "DECIMAL": "NUMBER...
  features: {"schemas": true, "auto_increment": true, "arrays": true, "json": true, "uuid": false, "enums": fals...
BIGQUERY (BigQuery)
  description: Google's serverless, highly scalable data warehouse with built-in ML capabilities
  version_support: All versions
  export_instructions: To export your BigQuery schema for import into Datada, use the bq command-line tool or Cloud Console...
  type_mappings: {"INTEGER": "INT64", "BIGINT": "INT64", "SMALLINT": "INT64", "DECIMAL": "NUMERIC({precision},{scale}...
  features: {"schemas": true, "auto_increment": false, "arrays": true, "json": true, "uuid": false, "enums": fal...
SQLITE (SQLite)
  description: Lightweight embedded database ideal for local storage, mobile apps, and development
  version_support: 3.0+
  export_instructions: To export your SQLite schema for import into Datada, use the sqlite3 command-line tool:
    # Export sc...
  type_mappings: {"INTEGER": "INTEGER", "BIGINT": "INTEGER", "SMALLINT": "INTEGER", "DECIMAL": "REAL", "FLOAT": "REAL...
  features: {"schemas": false, "auto_increment": true, "arrays": false, "json": true, "uuid": false, "enums": fa...
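SQLite ships in the Python standard library, so the schema export the truncated instructions describe can be demonstrated end to end. Reading the `sql` column of `sqlite_master` yields the same DDL the sqlite3 CLI's `.schema` command prints (whether `.schema` is the exact command in the truncated record is an assumption); the in-memory database and `users` table are placeholders:

```python
import sqlite3

# In-memory database standing in for your_database.db (name is a placeholder).
conn = sqlite3.connect(":memory:")

# INTEGER PRIMARY KEY aliases the rowid, which is how SQLite provides the
# auto-increment behaviour flagged in the features above.
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# sqlite_master stores the original DDL of every table.
rows = conn.execute("SELECT sql FROM sqlite_master WHERE type = 'table'").fetchall()
ddl = rows[0][0]
print(ddl)
```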
DELTALAKE (Delta Lake)
  description: Delta Lake open-source storage layer for data lakes with ACID transactions
  version_support: Delta Lake 2.x, 3.x
  export_instructions: Delta Lake is an open-source storage layer for data lakehouse:
    - ACID transactions on data lakes
    - S...
  type_mappings: {"INTEGER": "INT", "BIGINT": "BIGINT", "SMALLINT": "SMALLINT", "TINYINT": "TINYINT", "DECIMAL": "DEC...
  features: {"schemas": true, "auto_increment": false, "arrays": true, "json": true, "uuid": false, "enums": fal...
MYSQL (MySQL)
  description: Popular open-source relational database known for its ease of use and web application support
  version_support: 5.7+
  export_instructions: To export your MySQL schema for import into Datada, use mysqldump:
    # Export schema only (no data)
    m...
  type_mappings: {"INTEGER": "INT", "BIGINT": "BIGINT", "SMALLINT": "SMALLINT", "DECIMAL": "DECIMAL({precision},{scal...
  features: {"schemas": false, "auto_increment": true, "arrays": false, "json": true, "uuid": false, "enums": tr...
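Several dialects above (MySQL, PostgreSQL, BigQuery, Databricks, SQL Server) carry parameterized mappings with `{precision}` and `{scale}` placeholders. These are plain templates that can be filled with `str.format`; the precision and scale values below are illustrative:

```python
# Parameterized mapping as it appears in the MySQL type_mappings.
template = "DECIMAL({precision},{scale})"

# Substitute concrete column parameters (values are illustrative).
col_type = template.format(precision=10, scale=2)
print(col_type)  # DECIMAL(10,2)
```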
DUCKDB (DuckDB)
  description: In-process analytical database with columnar storage and vectorized execution
  version_support: 0.8+
  export_instructions: To export your DuckDB schema for import into Datada:
    # Using DuckDB CLI
    duckdb your_database.duckdb...
  type_mappings: {"INTEGER": "INTEGER", "BIGINT": "BIGINT", "SMALLINT": "SMALLINT", "TINYINT": "TINYINT", "DECIMAL": ...
  features: {"schemas": true, "auto_increment": false, "sequences": true, "arrays": true, "json": true, "uuid": ...
ORC (ORC)
  description: Apache ORC (Optimized Row Columnar) storage format for Hadoop
  version_support: Apache ORC 1.8+
  export_instructions: Apache ORC is a columnar storage format optimized for Hadoop:
    - High compression with predicate push...
  type_mappings: {"INTEGER": "int", "BIGINT": "bigint", "SMALLINT": "smallint", "TINYINT": "tinyint", "DECIMAL": "dec...
  features: {"schemas": true, "auto_increment": false, "arrays": true, "json": true, "uuid": false, "enums": fal...
DATABRICKS (Databricks)
  description: Unified analytics platform built on Apache Spark with Delta Lake support
  version_support: All versions
  export_instructions: To export your Databricks schema for import into Datada, use SQL or the Databricks CLI:
    # Using Dat...
  type_mappings: {"INTEGER": "INT", "BIGINT": "BIGINT", "SMALLINT": "SMALLINT", "DECIMAL": "DECIMAL({precision},{scal...
  features: {"schemas": true, "auto_increment": false, "arrays": true, "json": true, "uuid": false, "enums": fal...
SQLSERVER (SQL Server)
  description: Microsoft's enterprise relational database management system
  version_support: 2016+
  export_instructions: To export your SQL Server schema for import into Datada, use SQL Server Management Studio (SSMS) or ...
  type_mappings: {"INTEGER": "INT", "BIGINT": "BIGINT", "SMALLINT": "SMALLINT", "DECIMAL": "DECIMAL({precision},{scal...
  features: {"schemas": true, "auto_increment": true, "arrays": false, "json": true, "uuid": true, "enums": fals...
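Each features cell is a JSON object of booleans, so `json.loads` turns the lowercase `true`/`false` literals into Python bools that can gate dialect-specific output. A sketch using a subset of the SQL Server features shown above (the truncated keys are omitted):

```python
import json

# Subset of the SQLSERVER features cell; json.loads converts the JSON
# booleans to Python True/False.
features = json.loads('{"schemas": true, "auto_increment": true, "arrays": false, "uuid": true}')

# Example of gating a capability on a feature flag.
if features["uuid"]:
    print("dialect supports a native UUID type")
```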