Can you provide a code sample of a bash script that downloads a text file from a given URL, removes all lines beginning with a `#`, then treats every remaining line as a URL and downloads them all, appending the results to a single file?
Here's a bash script that achieves what you've described:

```bash
#!/bin/bash

# Input URL containing the list of URLs
input_url="https://example.com/list_of_urls.txt"

# Temporary file for filtered URLs
filtered_urls_file="filtered_urls.txt"

# Output file containing all the downloaded content
output_file="downloaded_data.txt"

# Download the list of URLs and remove lines starting with #
curl -s "$input_url" | grep -v '^#' > "$filtered_urls_file"

# Download the content of each URL and append it to the output file
while read -r url; do
  echo "Downloading content from: $url"
  curl -s "$url" >> "$output_file"
  echo >> "$output_file" # Add a newline separator between downloads
done < "$filtered_urls_file"

# Clean up
rm "$filtered_urls_file"

echo "All content downloaded and saved to $output_file"
```
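Note that with plain `curl -s`, an unreachable URL fails silently and an HTTP error page may be appended to the output as if it were real content. A slightly more defensive version of the download loop (a sketch using curl's standard `-f`, `-S`, and `--retry` flags; adjust to taste) logs failures to stderr instead:

```bash
while read -r url; do
  echo "Downloading content from: $url"
  # -f makes curl exit non-zero on HTTP errors instead of saving the error page
  if ! curl -fsS --retry 3 "$url" >> "$output_file"; then
    echo "Warning: failed to download $url" >&2
  fi
  echo >> "$output_file" # Newline separator between downloads
done < "$filtered_urls_file"
```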

Replace the `input_url` variable with the actual URL of your list. The script downloads the content from each URL and appends it to `downloaded_data.txt`. Make sure to give the script execute permissions before running it:

```bash
chmod +x download_script.sh
./download_script.sh
```
Can you remove all the comments and make sure the code is well formatted so I can just copy and paste it into my IDE?
Comments removed as requested:

```bash
#!/bin/bash

input_url="https://example.com/list_of_urls.txt"
filtered_urls_file="filtered_urls.txt"
output_file="downloaded_data.txt"

curl -s "$input_url" | grep -v '^#' > "$filtered_urls_file"

while read -r url; do
  echo "Downloading content from: $url"
  curl -s "$url" >> "$output_file"
  echo >> "$output_file"
done < "$filtered_urls_file"

rm "$filtered_urls_file"

echo "All content downloaded and saved to $output_file"
```
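
If you'd rather skip the temporary file entirely, the same logic can be written as a single pipeline. This is just a sketch of an alternative, using the same hypothetical `list_of_urls.txt` URL; it behaves the same except that it overwrites the output file on each run instead of appending to it:

```bash
#!/bin/bash

input_url="https://example.com/list_of_urls.txt"
output_file="downloaded_data.txt"

# Fetch the list, drop comment lines, and download each URL in turn
curl -s "$input_url" | grep -v '^#' | while read -r url; do
  curl -s "$url"
  echo # Newline separator between downloads
done > "$output_file"
```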