Azure Data Factory: Data Flow joining multiple sources returns empty output
I'm working on a project and hit a roadblock, and I've searched everywhere without finding a clear answer. I'm building an Azure Data Factory pipeline that needs to combine data from an Azure SQL Database and Azure Blob Storage. I've set up a data flow with a join transformation, but I'm running into an issue where I get the message: `The join condition does not match any rows, resulting in an empty output.` My join keys are set up correctly according to the schema, yet the output is always empty despite both sources containing valid data.

Here's a snippet of my data flow:

1. **Source1**: Azure SQL Database
   - Table: Customers
   - Key: CustomerID
2. **Source2**: Azure Blob Storage
   - Format: CSV
   - Key: CustomerID

In the join transformation, I've set it up as follows:

- Join type: Inner
- Condition: `Source1.CustomerID = Source2.CustomerID`

I also set the schema for both sources explicitly and verified that the data types for `CustomerID` match. I've enabled data preview on both sources and can see the expected rows in each dataset, yet the join still produces no output. To troubleshoot, I confirmed with sample queries that matching `CustomerID` values do exist across both datasets, but the data flow continues to return an empty result. Could this be related to a data type mismatch, or something else specific to the configuration? I've also checked the performance settings and made sure the integration runtime is set up correctly. The relevant data flow script and a normalization step I'm considering are included below. Any tips on how to diagnose and resolve this would be greatly appreciated. Thanks for any help you can provide!
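For reference, here is approximately what the script behind my join transformation looks like. I reconstructed it from the designer, so the output stream name and the option values are my own paraphrase, but the join condition matches my setup:

```
Source1, Source2 join(
    Source1@CustomerID == Source2@CustomerID,
    joinType: 'inner',
    broadcast: 'auto'
) ~> JoinOnCustomerID
```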
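In case the empty output comes from the CSV key carrying hidden whitespace or arriving as a string, I'm considering normalizing the key with a derived column before the join. This is only a sketch, assuming `CustomerID` is an integer on the SQL side; `trim` and `toInteger` are standard data flow expression functions, and the stream name `NormalizeCustomerID` is mine:

```
Source2 derive(CustomerID = toInteger(trim(CustomerID))) ~> NormalizeCustomerID
```

Would that be a reasonable way to rule out type or whitespace issues, or is there a better diagnostic?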