docs/integrations/destinations/redshift.md (+3 -2)
@@ -6,7 +6,7 @@ The Airbyte Redshift destination allows you to sync data to Redshift.
This Redshift destination connector has two replication strategies:

- 1. INSERT: Replicates data via SQL INSERT queries. This is built on top of the destination-jdbc code base and is configured to rely on JDBC 4.2 standard drivers provided by Amazon via Mulesoft [here](https://mvnrepository.com/artifact/com.amazon.redshift/redshift-jdbc42) as described in Redshift documentation [here](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-install.html). **Not recommended for production workloads as this does not scale well**.
+ 1. INSERT: Replicates data via SQL INSERT queries. This is built on top of the destination-jdbc code base and is configured to rely on JDBC 4.2 standard drivers provided by Amazon via Mulesoft [here](https://mvnrepository.com/artifact/com.amazon.redshift/redshift-jdbc42) as described in Redshift documentation [here](https://docs.aws.amazon.com/redshift/latest/mgmt/jdbc20-install.html). **Not recommended for production workloads as this does not scale well**.
2. COPY: Replicates data by first uploading data to an S3 bucket and issuing a COPY command. This is the recommended loading approach described by Redshift [best practices](https://docs.aws.amazon.com/redshift/latest/dg/c_loading-data-best-practices.html). Requires an S3 bucket and credentials.

Airbyte automatically picks an approach depending on the given configuration - if S3 configuration is present, Airbyte will use the COPY strategy and vice versa.
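For orientation, the difference between the two strategies boils down to the kind of SQL that ends up running against the cluster. Below is a minimal sketch: the schema, table, column, bucket, and role names are illustrative placeholders, not the exact statements the connector generates, and the COPY authorization is shown with an IAM role for brevity (the key-based form matching this connector's S3 settings appears in the next hunk).

```sql
-- INSERT strategy: each batch of records is written with plain SQL over JDBC.
INSERT INTO airbyte_schema._airbyte_raw_users (_airbyte_ab_id, _airbyte_data, _airbyte_emitted_at)
VALUES ('9f1a2b3c-0000-0000-0000-000000000001', '{"id": 1, "name": "alice"}', GETDATE());

-- COPY strategy: records are first written to files in an S3 bucket,
-- then loaded in bulk with a single COPY command.
COPY airbyte_schema._airbyte_raw_users
FROM 's3://my-staging-bucket/users/part-0000.csv.gz'
IAM_ROLE 'arn:aws:iam::123456789012:role/my-redshift-copy-role'
CSV GZIP;
```

Bulk-loading staged files is why COPY scales far better than row-by-row INSERTs, which matches the recommendation above to reserve the INSERT strategy for small or test workloads.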
@@ -79,7 +79,7 @@ Provide the required S3 info.
* Place the S3 bucket and the Redshift cluster in the same region to save on networking costs.
* **Access Key Id**
  * See [this](https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html#access-keys-and-secret-access-keys) on how to generate an access key.
- * We recommend creating an Airbyte-specific user. This user will require [read and write permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html) to objects in the staging bucket.
+ * We recommend creating an Airbyte-specific user. This user will require [read and write permissions](https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_examples_s3_rw-bucket.html) to objects in the staging bucket.
* **Secret Access Key**
  * Corresponding key to the above key id.
* **Part Size**
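As a concrete illustration of where these settings end up, a COPY statement authorized with an access key pair looks roughly like the following (placeholder values; the connector assembles the real statement internally):

```sql
-- COPY itself only needs to read the staged objects; the same key pair is what
-- Airbyte uses to upload them, hence the read and write requirement above.
COPY airbyte_schema._airbyte_raw_users
FROM 's3://my-staging-bucket/users/'
CREDENTIALS 'aws_access_key_id=<ACCESS_KEY_ID>;aws_secret_access_key=<SECRET_ACCESS_KEY>'
CSV GZIP;
```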
@@ -118,6 +118,7 @@ All Redshift connections are encrypted using SSL
| Version | Date | Pull Request | Subject |
| :------ | :-------- | :----- | :------ |
+ | 0.3.21 | 2021-12-10 |[#8562](https://github.com/airbytehq/airbyte/pull/8562)| Moving classes around for better dependency management |
| 0.3.20 | 2021-11-08 |[#7719](https://github.com/airbytehq/airbyte/pull/7719)| Improve handling of wide rows by buffering records based on their byte size rather than their count |
| 0.3.19 | 2021-10-21 |[7234](https://github.com/airbytehq/airbyte/pull/7234)| Allow SSL traffic only |
| 0.3.17 | 2021-10-12 |[6965](https://github.com/airbytehq/airbyte/pull/6965)| Added SSL Support |
docs/integrations/destinations/s3.md (+2 -1)
@@ -223,7 +223,8 @@ Under the hood, an Airbyte data stream in Json schema is first converted to an A
| Version | Date | Pull Request | Subject |
| :--- | :--- | :--- | :--- |
- | 0.1.15 | 2021-12-03 |[\#9999](https://github.com/airbytehq/airbyte/pull/9999)| Remove excessive logging for Avro and Parquet invalid date strings. |
+ | 0.1.16 | 2021-12-10 |[\#8562](https://github.com/airbytehq/airbyte/pull/8562)| Swap dependencies with destination-jdbc. |
+ | 0.1.15 | 2021-12-03 |[\#8501](https://github.com/airbytehq/airbyte/pull/8501)| Remove excessive logging for Avro and Parquet invalid date strings. |
| 0.1.14 | 2021-11-09 |[\#7732](https://github.com/airbytehq/airbyte/pull/7732)| Support timestamp in Avro and Parquet |
| 0.1.13 | 2021-11-03 |[\#7288](https://github.com/airbytehq/airbyte/issues/7288)| Support Json `additionalProperties`. |
| 0.1.12 | 2021-09-13 |[\#5720](https://github.com/airbytehq/airbyte/issues/5720)| Added configurable block size for stream. Each stream is limited to 10,000 by S3 |
docs/integrations/destinations/snowflake.md (+2 -1)
@@ -162,7 +162,7 @@ First you will need to create a GCS bucket.
Then you will need to run the script below:

- * You must run the script as the account admin for Snowflake.
+ * You must run the script as the account admin for Snowflake.
* You should replace `AIRBYTE_ROLE` with the role you used for Airbyte's Snowflake configuration.
* Replace `YOURBUCKETNAME` with your bucket name
* The stage name can be modified to any valid name.
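For orientation, the kind of script being referred to creates a GCS storage integration and a stage, then grants usage to the Airbyte role. A hedged sketch follows (object names are placeholders and this is not the exact script from the Airbyte docs; run it as the account admin with `AIRBYTE_ROLE` and `YOURBUCKETNAME` substituted as described above):

```sql
-- Create an integration that lets Snowflake access the GCS bucket.
CREATE STORAGE INTEGRATION gcs_airbyte_integration
  TYPE = EXTERNAL_STAGE
  STORAGE_PROVIDER = 'GCS'
  ENABLED = TRUE
  STORAGE_ALLOWED_LOCATIONS = ('gcs://YOURBUCKETNAME');

-- Create the stage Airbyte will load from; the name can be any valid identifier.
CREATE STAGE gcs_airbyte_stage
  URL = 'gcs://YOURBUCKETNAME'
  STORAGE_INTEGRATION = gcs_airbyte_integration;

-- Let the Airbyte role use both objects.
GRANT USAGE ON INTEGRATION gcs_airbyte_integration TO ROLE AIRBYTE_ROLE;
GRANT USAGE ON STAGE gcs_airbyte_stage TO ROLE AIRBYTE_ROLE;
```

Snowflake then exposes a service account for the integration (e.g. via `DESC STORAGE INTEGRATION`), which is the email address the subsequent step grants read/write bucket permissions to.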
@@ -194,6 +194,7 @@ Finally, you need to add read/write permissions to your bucket with that email.
| Version | Date | Pull Request | Subject |
| :------ | :-------- | :----- | :------ |
+ | 0.3.20 | 2021-12-10 |[#8562](https://github.com/airbytehq/airbyte/pull/8562)| Moving classes around for better dependency management; compatibility fix for Java 17 |
| 0.3.19 | 2021-12-06 |[#8528](https://github.com/airbytehq/airbyte/pull/8528)| Set Internal Staging as default choice |
| 0.3.18 | 2021-11-26 |[#8253](https://github.com/airbytehq/airbyte/pull/8253)| Snowflake Internal Staging Support |
| 0.3.17 | 2021-11-08 |[#7719](https://github.com/airbytehq/airbyte/pull/7719)| Improve handling of wide rows by buffering records based on their byte size rather than their count |