AWS Glue job fails with 'InvalidParameterValueException' for Spark job using AWS Glue 2.0
I'm converting an old project and am currently running into an issue while trying to run an AWS Glue job with AWS Glue version 2.0. The job is supposed to read data from an S3 bucket, process it using Apache Spark, and write the output to another S3 location. However, I keep getting the following error message:

`InvalidParameterValueException: The provided value for the 'JobCommand' parameter is invalid.`

I've ensured that the script path in the job definition is correctly pointing to my Python script in S3, but it still fails. Here's the configuration I'm using:

```json
{
  "Name": "my-glue-job",
  "Role": "AWSGlueServiceRole",
  "Command": {
    "Name": "glueetl",
    "ScriptLocation": "s3://my-bucket/scripts/my_script.py"
  },
  "DefaultArguments": {
    "--TempDir": "s3://my-bucket/temp/",
    "--additional-python-modules": "pandas==1.1.5"
  },
  "GlueVersion": "2.0"
}
```

I've checked the IAM role permissions, and everything seems fine: the role has full access to both Glue and the S3 bucket. I also confirmed that the script exists at the specified S3 path.

To troubleshoot, I tested the script locally with PySpark and it runs without issues. I even added logging to the script to see if it might be failing silently before reaching the Glue context. I also attempted to run the job with a simple 'Hello World' Spark script (included at the end of this post), but the same error occurs.

Any insights on what might be wrong with the job configuration or the parameters being used? I've noticed that AWS Glue can sometimes be finicky with version compatibility; could that be a factor here?
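For reference, this is roughly the minimal 'Hello World' script I tested with: just the standard Glue 2.0 PySpark boilerplate plus a trivial DataFrame, nothing project-specific (the column names are arbitrary):

```python
import sys

from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext

# JOB_NAME is passed in automatically when Glue starts the job
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

# Standard Glue boilerplate: wrap the SparkContext in a GlueContext
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session

# Trivial DataFrame just to prove the job runs end to end
df = spark.createDataFrame([("hello", "world")], ["greeting", "target"])
df.show()
```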
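And for completeness, here is how that JSON maps onto a boto3 `create_job` call, as a minimal sketch in case it helps reproduce the error (the region is a placeholder, not my actual setup, and the values mirror the JSON above one-to-one):

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")  # placeholder region

response = glue.create_job(
    Name="my-glue-job",
    Role="AWSGlueServiceRole",  # passing the role name here, not the full ARN
    Command={
        "Name": "glueetl",
        "ScriptLocation": "s3://my-bucket/scripts/my_script.py",
    },
    DefaultArguments={
        "--TempDir": "s3://my-bucket/temp/",
        "--additional-python-modules": "pandas==1.1.5",
    },
    GlueVersion="2.0",
)
print(response["Name"])
```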