GCP Dataflow job fails with 'java.lang.IllegalArgumentException: Invalid table reference' when using BigQueryIO
I'm working on a Dataflow job that reads messages from a Pub/Sub topic and writes them to a BigQuery table. The job fails during execution with `java.lang.IllegalArgumentException: Invalid table reference`, and the failure points at the step where I define the BigQueryIO write. Here's the relevant snippet of my pipeline code:

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollection;

// Read the raw messages from Pub/Sub as strings.
PCollection<String> pubSubData = pipeline.apply("Read from PubSub",
    PubsubIO.readStrings().fromTopic("projects/my-project/topics/my-topic"));

// Convert each message to a TableRow and append it to the BigQuery table.
pubSubData.apply("Transform to TableRow", ParDo.of(new MyTransform()))
    .apply("Write to BigQuery", BigQueryIO.writeTableRows()
        .to("my-project:dataset.table_name")
        .withCreateDisposition(BigQueryIO.Write.CreateDisposition.CREATE_IF_NEEDED)
        .withWriteDisposition(BigQueryIO.Write.WriteDisposition.WRITE_APPEND));
```

I double-checked the table name and dataset passed to `to()`, and they look correct. I also verified that the dataset exists and that I have permission to write to it. I'm using SDK version 2.35.0 and Java 11.

What could be causing this error? Is there a specific format or naming convention I need to follow for the table reference? I've also logged the output of the transformation to confirm that the `TableRow` objects are being generated correctly, and that didn't reveal any anomalies. Any insights would be appreciated.
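In case it helps, here is roughly what `MyTransform` looks like, including the debug logging I mentioned. This is a simplified sketch, not my exact production code: the real class parses the JSON payload into several columns, and the `payload` field below is just a placeholder.

```java
import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.transforms.DoFn;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Simplified stand-in for my real transform; the actual class maps the
// Pub/Sub payload onto multiple BigQuery columns.
class MyTransform extends DoFn<String, TableRow> {
  private static final Logger LOG = LoggerFactory.getLogger(MyTransform.class);

  @ProcessElement
  public void processElement(@Element String message, OutputReceiver<TableRow> out) {
    TableRow row = new TableRow().set("payload", message);  // placeholder column
    // Logging added while debugging, to confirm rows are built as expected.
    LOG.info("Emitting row: {}", row);
    out.output(row);
  }
}
```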