Trouble with conditional execution in a Bash script for a data integration project
I've been banging my head against this for hours and have tried several approaches, but none seem to work. I'm writing a Bash script that integrates data from two different APIs, and I've run into a snag with conditional execution of commands based on API responses. The requirement is: if the first API returns a success status, proceed to query the second one; if it fails, log an error message and exit the script gracefully. Here's a snippet of what I have so far:

```bash
#!/bin/bash

response1=$(curl -s -w '%{http_code}' -o /dev/null 'https://api.first.com/data')

if [ $response1 -eq 200 ]; then
    echo "First API call was successful."
    response2=$(curl -s -w '%{http_code}' -o /dev/null 'https://api.second.com/data')
    if [ $response2 -eq 200 ]; then
        echo "Second API call was successful."
    else
        echo "Error: Second API call failed with status code $response2"
        exit 1
    fi
else
    echo "Error: First API call failed with status code $response1"
    exit 1
fi
```

While testing, I noticed that even when the first API should return success, the script occasionally reports a failure. I suspect it might be due to how I'm handling the HTTP response codes. I also tried adding some `sleep` commands to space out the calls, but that didn't resolve the inconsistency.

Any advice on how to improve this logic or troubleshoot the issue would be greatly appreciated. Also, are there any common pitfalls with using `curl` this way that I should be aware of? The script runs as part of a service, and the behavior started after I updated Bash to the latest stable release. Environment: Bash in a Docker container on Windows 11. Thanks!
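
**Edit:** for reference, the sleep-based variant I mentioned looked roughly like this (the interval is just a value I experimented with, not anything principled); the intermittent failure still showed up with it:

```bash
response1=$(curl -s -w '%{http_code}' -o /dev/null 'https://api.first.com/data')

# Pause before evaluating the status, then again before the second call.
sleep 5

if [ $response1 -eq 200 ]; then
    echo "First API call was successful."
    sleep 5
    response2=$(curl -s -w '%{http_code}' -o /dev/null 'https://api.second.com/data')
    # ... same success/failure handling as in the snippet above ...
fi
```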
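
**Edit 2:** I was also wondering whether I should be checking curl's own exit status separately from the HTTP status code, along these lines (untested sketch), or whether the problem is elsewhere:

```bash
# curl exits non-zero on connection/DNS/timeout errors, even with -s;
# in that case $response1 may be empty or "000" rather than a real HTTP code.
if ! response1=$(curl -s -w '%{http_code}' -o /dev/null 'https://api.first.com/data'); then
    echo "Error: curl itself failed (network/DNS issue?)"
    exit 1
fi
```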