Uploading logs to Amazon S3 or S3-compatible storage

Import collected logs to your own pipeline or keep a long-term archive

Written by Jordi Giménez

Bugfender can upload your logs to an Amazon S3 bucket or other S3-compatible storage, so you can archive them long-term and perform your own data analysis.

How does it work?

  • Bugfender uploads a file every day, named in the format of YYYY-MM-DD.csv.gz. Days are split at midnight GMT.

  • The file format is the same CSV you get when downloading logs from multiple devices in the Bugfender dashboard (a reading sketch follows this list).

  • The time of the file upload is variable: you can use an S3 event notification to know when a file has been uploaded (a notification sketch also follows this list).

  • Please note, a file might contain logs from previous days if they were generated in the past but received after midnight.

  • The file is compressed in gzip format to save space and download time.
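For reference, here is a minimal sketch of fetching and reading one day's file with Python and boto3. The bucket name, object key and credential setup are placeholders; adapt them to your own account. The rows carry the same columns as the dashboard CSV download.

import csv
import gzip
import io

import boto3

BUCKET = "my-bugfender-logs"   # placeholder: your bucket name
KEY = "2024-01-15.csv.gz"      # one daily file, named YYYY-MM-DD.csv.gz

s3 = boto3.client("s3")  # uses the credentials from your environment/AWS config

# Download the gzip-compressed daily export and decompress it in memory.
obj = s3.get_object(Bucket=BUCKET, Key=KEY)
with gzip.open(io.BytesIO(obj["Body"].read()), mode="rt", newline="") as f:
    for row in csv.reader(f):
        print(row)  # one log line per row, same columns as the dashboard CSV download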
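And a sketch of the notification setup: this configures the bucket to send a message to an SQS queue whenever a new .csv.gz object is created. The queue ARN is a placeholder, and the queue's access policy must already allow S3 to send messages to it.

import boto3

BUCKET = "my-bugfender-logs"  # placeholder: your bucket name
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:bugfender-uploads"  # placeholder

s3 = boto3.client("s3")

# Notify the queue whenever a new daily export (*.csv.gz) lands in the bucket.
s3.put_bucket_notification_configuration(
    Bucket=BUCKET,
    NotificationConfiguration={
        "QueueConfigurations": [
            {
                "QueueArn": QUEUE_ARN,
                "Events": ["s3:ObjectCreated:*"],
                "Filter": {
                    "Key": {"FilterRules": [{"Name": "suffix", "Value": ".csv.gz"}]}
                },
            }
        ]
    },
)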

How much does it cost?

This feature is included in our Pro and Premium plans (also in the legacy Business plan). Please bear in mind that Amazon might charge for the usage of the S3 service.

How do I set it up?

Here you can find the setup instructions for Amazon S3. The steps will vary if you're using another S3-compatible storage provider.

  • Log in to the Amazon Web Services console.

  • Create a bucket: go to S3, press Create Bucket and give your bucket a name.

  • Next, set up the permissions. Go to IAM > Policies > Create policy, select Create Your Own Policy, call it s3-bugfender-logs (or whatever you like) and paste the following code, replacing <bucket-name> with the name you chose for your bucket in the previous step:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::<bucket-name>"
        },
        {
            "Action": [
                "s3:PutObject"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        }
    ]
}

  • Then go to Users > Add user, call it bugfender (or whatever you like) and tick Programmatic access.

  • Then select Attach existing policies directly and tick the policy we just created (s3-bugfender-logs).

  • Then click Next, review the settings and download the resulting credentials. (If you'd rather script these steps, see the sketch after this list.)
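If you prefer to script the AWS setup instead of clicking through the console, here is a rough boto3 equivalent of the steps above. The bucket, policy and user names are the same placeholders as in the console instructions; run it with credentials that are allowed to administer S3 and IAM.

import json

import boto3

BUCKET = "<bucket-name>"  # the bucket name you chose above

s3 = boto3.client("s3")
iam = boto3.client("iam")

# 1. Create the bucket (outside us-east-1 you must also pass
#    CreateBucketConfiguration={"LocationConstraint": "<region>"}).
s3.create_bucket(Bucket=BUCKET)

# 2. Create the policy with the same document shown above.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Action": ["s3:ListBucket"], "Effect": "Allow",
         "Resource": f"arn:aws:s3:::{BUCKET}"},
        {"Action": ["s3:PutObject"], "Effect": "Allow",
         "Resource": f"arn:aws:s3:::{BUCKET}/*"},
    ],
}
policy = iam.create_policy(
    PolicyName="s3-bugfender-logs",
    PolicyDocument=json.dumps(policy_document),
)

# 3. Create the user and attach the policy to it.
iam.create_user(UserName="bugfender")
iam.attach_user_policy(
    UserName="bugfender",
    PolicyArn=policy["Policy"]["Arn"],
)

# 4. Create the access key: these are the credentials you'll enter in Bugfender.
key = iam.create_access_key(UserName="bugfender")
print(key["AccessKey"]["AccessKeyId"])
print(key["AccessKey"]["SecretAccessKey"])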

Once AWS is set up, go to your Bugfender dashboard, open the application's Settings tab and enter the details in the integration configuration.

Supported storage providers

Bugfender can upload logs to any S3-compatible storage provider, supporting both DNS-style (virtual-hosted) and path-style addressing. For example:

  • Amazon Web Services Simple Storage Service (S3)

  • Wasabi Storage

  • DreamObjects Cloud Storage

  • DigitalOcean Spaces

  • Dunkel Cloud Storage

  • Exoscale Swiss Object Store

  • Scaleway Object Storage

  • Alibaba Cloud Object Storage Service (OSS)

  • Oracle Cloud Infrastructure (OCI) Object Storage

  • Filebase

  • Z1 Storage

You can also set up a bucket backed by your own disks or other storage backends, like Backblaze B2 Cloud Storage, EMC Atmos, Google Cloud, Microsoft Azure, or OpenStack Swift, using MinIO or S3Proxy.
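With non-AWS providers the difference is usually just the endpoint and, in some cases, path-style addressing. As an illustration of the distinction, here is a minimal sketch of pointing boto3 at a self-hosted MinIO server, for instance to fetch the archived logs from your own pipeline; the endpoint, credentials and bucket name are placeholders.

import boto3
from botocore.config import Config

# Placeholder endpoint and credentials for a self-hosted MinIO server.
s3 = boto3.client(
    "s3",
    endpoint_url="https://minio.example.com:9000",
    aws_access_key_id="<access-key>",
    aws_secret_access_key="<secret-key>",
    # Many self-hosted or S3-compatible setups expect path-style requests
    # (https://host/bucket/key) instead of DNS-style (https://bucket.host/key).
    config=Config(s3={"addressing_style": "path"}),
)

# From here on it works the same as against AWS.
print(s3.list_objects_v2(Bucket="bugfender-logs").get("KeyCount", 0))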

Tips

Per-session format (deprecated, no longer available):

  • Bugfender uploads the logs in batches, every day around midnight GMT.

  • One folder is created for every day, named in the format YYYY-MM-DD.

  • Each folder contains a set of files, each file representing a session, with the naming format deviceId_sessionId.log.

  • All sessions that changed during that day are uploaded. If a session spans several days, it is uploaded every day until it finishes; each time, the file contains the full session (not only that day's logs).
