Electronics Programming

Raspberry Pi Data Logger with InfluxDB and Grafana

A need popped up at work for a data logger for various lab tasks. A quick look at the market failed to identify a lab tool for data logging that was cheap, easy yet powerful to set up, and remotely accessible; something for researchers and scientists. I decided a Raspberry Pi with some input buffering would be ideal for the task. This is my roll-your-own data logger, put together in a Saturday – showing what is possible quickly, and the potential with more development time.

Hardware Setup

  • Raspberry Pi (3) – I used a 3 for its integrated WiFi and speed, but any Pi should work.
  • SD card – a reliable brand, as the database will be writing frequently. I used a 16GB SanDisk Extreme.
  • Some form of I/O buffer or input device. I first developed with a DHT22 temperature/humidity sensor, followed by a potential divider/Zener diode digital logic buffer, and finally a Pimoroni Automation HAT for analogue voltage. I plan on creating a custom multi-channel buffered 24V logic and 0–10V analogue input card to partner with this, for industrial device testing.

Install InfluxDB and Grafana on Raspberry Pi

A while back I attempted this but ran into problems installing and configuring InfluxDB and Grafana. I had attempted to use Docker, but without success. Thankfully, .deb packages now exist in the Debian ‘Stretch’ repository for armhf (Raspberry Pi). You can either add ‘Stretch’ to your list of sources or download the packages with wget:

# update these URLs by looking at the website:
sudo apt-get update
sudo apt-get upgrade
wget # influxdb package
sudo dpkg -i influxdb_1.0.2+dfsg1-1_armhf.deb
wget # grafana-data is a dependency of grafana
sudo dpkg -i grafana-data_2.6.0+dfsg-3_all.deb
sudo apt-get install -f
sudo dpkg -i grafana_2.6.0+dfsg-3_armhf.deb
sudo apt-get install -f

Setup InfluxDB

Installed via the .deb package, InfluxDB creates a service and enables it at startup, so there is little more configuration to do. You must create the database that our script will be saving into. This can be done via the web GUI. Navigate to your Pi’s IP on port 8083 (the default InfluxDB admin port – http://localhost:8083 if you’re doing this on the Pi). Using the ‘Query Templates’ dropdown, you can create a database with ‘Create Database’ – it’s fairly self-explanatory. Make one called ‘logger’ to store our sample data.
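If you prefer not to use the web GUI, the ‘Create Database’ template simply issues an InfluxQL statement, which you can run yourself from the admin panel’s query box (or over the HTTP API with curl):

```sql
CREATE DATABASE logger
```

This is equivalent to clicking the template in the GUI; the database name ‘logger’ matches what the script below expects.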

InfluxDB Create Database

Create Logging Script

The example below uses Pimoroni’s Automation HAT but can be easily adapted to a GPIO pin or DHT22 using the commented lines. It’s worth having a read of the InfluxDB Key Concepts, but essentially I create a unique tag called ‘run’ each time the script runs. This enables a user to differentiate between test setups with no backend jiggery-pokery when we come to Grafana. The measurement name can be set via the command arguments, to enable a user to set different testing sessions; by default it is ‘dev’. You’ll need the InfluxDB Python module for this script, which you can install like this:

sudo pip install influxdb

import time
import sys
import datetime
from influxdb import InfluxDBClient
import automationhat
#import RPi.GPIO as GPIO

# Set these variables; InfluxDB should be localhost on the Pi
host = "localhost"
port = 8086
user = "root"
password = "root"

# The database we created
dbname = "logger"
# Sample period (s)
interval = 1

# For GPIO
# channel = 14
# GPIO.setmode(GPIO.BCM)
# GPIO.setup(channel, GPIO.IN)

# Allow user to set session and runNo via args, otherwise auto-generate
if len(sys.argv) > 1:
    if len(sys.argv) < 3:
        print "Must define session and runNo!!"
        sys.exit()
    session = sys.argv[1]
    runNo = sys.argv[2]
else:
    session = "dev"
    now = datetime.datetime.now()
    runNo = now.strftime("%Y%m%d%H%M")

print "Session: ", session
print "runNo: ", runNo

# Create the InfluxDB client object
client = InfluxDBClient(host, port, user, password, dbname)

# Run until keyboard interrupt
try:
    while True:
        # Each read() returns a dict of the three channels, keyed 'one'/'two'/'three'
        vsense = automationhat.analog.read()
        op = automationhat.input.read()
        # gpio = GPIO.input(channel)
        print vsense
        print op
        # RFC3339 UTC timestamp (time.ctime() is not reliably parsed by InfluxDB)
        iso = datetime.datetime.utcnow().isoformat() + 'Z'

        json_body = [
            {
                "measurement": session,
                "tags": {
                    "run": runNo
                },
                "time": iso,
                "fields": {
                    "op1": op['one'], "op2": op['two'], "op3": op['three'],
                    "vsense1": vsense['one'], "vsense2": vsense['two'], "vsense3": vsense['three']
                    # ,"gpio": gpio
                }
            }
        ]

        # Write JSON to InfluxDB
        client.write_points(json_body)
        # Wait for next sample
        time.sleep(interval)

except KeyboardInterrupt:
    pass

Save the script as ‘’ and run it using python. To run the script in the background and indefinitely as an SSH user, use nohup python &.
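For something more robust than nohup, the script can also be run as a systemd service so it starts at boot and restarts if it crashes. A minimal sketch of a unit file – the unit name logger.service and the script path /home/pi/ are assumptions, so adjust them to match where you saved the script:

```ini
# /etc/systemd/system/logger.service (hypothetical name and path)
[Unit]
Description=InfluxDB data logger
After=influxdb.service

[Service]
# Adjust the interpreter and script path to your setup
ExecStart=/usr/bin/python /home/pi/
Restart=on-failure
User=pi

[Install]
WantedBy=multi-user.target
```

Enable and start it with sudo systemctl enable logger and sudo systemctl start logger, just as we do for Grafana below.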

Setup Grafana

Unlike InfluxDB, Grafana doesn’t enable its service, so do this to enable it at boot and start the service now:

sudo systemctl enable grafana-server
sudo systemctl start grafana-server

Navigating to port 3000, you should be presented with the Grafana web GUI. Grafana has plenty of powerful querying tools to make lots of pretty and informative graphs. Have a read of the Getting Started, and in particular the InfluxDB section.

The first thing you will need to do is add the ‘logger’ database as a Datasource. Navigate to Datasource->Add New and fill in as below

Create the ‘logger’ InfluxDB database as a Datasource

After this, you need to set up a dashboard. Click the top dropdown button, press ‘New’, and create a dashboard called ‘logger’. Dashboards consist of rows; clicking the green bar on the left-hand edge provides a dropdown where you can add graphs and so on – again, the Getting Started guide explains this better than I can. Below are a couple of captures of how I set up the Automation HAT.
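For reference, a typical graph query against the script’s data looks something like the following sketch – the measurement name ‘dev’ and field ‘vsense1’ match the script’s defaults, while $timeFilter and $interval are template variables Grafana fills in for you:

```sql
SELECT mean("vsense1") FROM "dev"
WHERE $timeFilter
GROUP BY time($interval), "run"
```

Grouping by the ‘run’ tag gives one series per script run, which is exactly the per-test-setup separation described above.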

Log Away

With this short introduction you should have the framework for a basic logging device and see the potential for real-world analytics of almost anything! I’m quite excited by the combination of a time-series database like InfluxDB and a powerful web graphing package like Grafana.

39 replies on “Raspberry Pi Data Logger with InfluxDB and Grafana”

Many thanks for documenting your “logger” software. This is the only thing I could find and is very similar to what I was hoping to attempt.
I am getting a missing parenthesis error at the very end of the line “print “Must define session and runNo”. I know it must be something very simple but it has got me totally baffled.

Allow user to set session and runno via args otherwise auto-generate

if len(sys.argv) > 1:
if (len(sys.argv) < 3):
print “Must define session and runNo”

I would really like to use your program to do some logging of growing systems at our local YMCA and Fire Station, so I would be grateful if you could help.

Paul Thompson (Community Roots)

Hi Paul. Sorry I’m away at the moment and have been for the last couple of weeks. I haven’t got a test bed to check what your problem is but sounds like a copy and paste problem causing the code syntax to be incorrect. Check all the indentations and characters are right.

The only .deb file on the Influx download page is a 64 bit version which won’t install on the current (32-bit) Raspbian: “package architecture (amd64) does not match system (armhf)”. How did you install InfluxDB on the Raspi?

I used a RPi 3 so didn’t encounter this issue (since it’s 64bit). If there isn’t a 32bit package, you might be able to use Docker or you’ll have to get the source and compile it yourself. The easiest option would be to get a RPi 3!

Thanks for the quick reply. I am actually using a RPi 3 with the jessie lite image dated 2016-11-25, but it is still 32-bit, as getconf LONG_BIT shows. However, besides the standard download page, there is also where a 32-bit repo is linked. After adding this repo, I first broke my apt-get because the new repo can only be accessed via https. That can be fixed by sudo apt-get install apt-transport-https, and then influxdata installs easily on 32 bits. Just leaving this info here in case it helps others. Adventures in pi, how we love it 🙂

Yes, it looks like a stable release has been added to the main repository in the 6 months since I did this, so the installation steps may well be much simpler. I’ll review and update the post when I get time.

Hi John,
I followed the instructions, but when I run “sudo systemctl enable grafana-server”, I get an error message: “Failed to execute operation: No such file or directory”. Why? What should I do?

I have the same problem, and “sudo systemctl start grafana-server” doesn’t give an error, but nothing happens on port 3000.
sudo systemctl status grafana-server gives:
● grafana.service – Starts and stops a single grafana instance on this system
Loaded: loaded (/lib/systemd/system/grafana.service; enabled)
Active: failed (Result: signal) since Mon 2017-01-30 19:21:41 CET; 9min ago
Process: 915 ExecStart=/usr/sbin/grafana --config=${CONF_FILE} cfg:default.paths.logs=${LOG_DIR} =${DATA_DIR} (code=killed, signal=SEGV)
Main PID: 915 (code=killed, signal=SEGV)

Jan 30 19:21:40 raspOTGW systemd[1]: Started Starts and stops a single grafana instance on this system.
Jan 30 19:21:41 raspOTGW systemd[1]: grafana.service: main process exited, code=killed, status=11/SEGV
Jan 30 19:21:41 raspOTGW systemd[1]: Unit grafana.service entered failed state.

And /usr/sbin/grafana gives a segmentation fault

Any help anyone?

First, thanks for your write-up. I was actually just starting from scratch on writing data to a MySQL database and then was going to work towards graphing… Anyways, thanks! This has helped kick start my project. The only thing that I’m stumped with and hoping you can explain, is why are you using a “potential divider/zener diode digital logic buffer”? Is this applied to the yellow wire to the buffered input on the automation hat to the DHT22? I am currently reading the DHT22/AM2302 off of the breakout GPIO pin because all I get is “1” from the buffered input read, instead of the temp/hum?

Thanks, glad it helped. I also started with MySQL before ending here!

The digital logic buffer was another input method I used for an external on/off signal I wanted to log, completely separate from the DHT22 logging; don’t use it with the DHT22, or you’ll get a single value, as you encountered.

Thanks for the clarification, just learning the depths of electronics and couldn’t figure out why you mentioned the divider. Again, Excellent write up and thanks!

Hi John,

Thank you for this tutorial. I have influxdb and Grafana working just fine, but not with my data yet. Trying to set up the Python script for a DHT22 on GPIO 4. Here is my edited script that does not work:

Any ideas?

Thank you for the article.
However, which is very frustrating, there are no mentions of logins and passwords to InfluxDB and Grafana!
I spent some time trying to figure out Grafana default login and password and then became stuck because I don’t know what user and password are to be used with InfluxDB when adding it to Grafana.

The point is that Grafana requires user and password fields for InfluxDB data source to be filled when adding the data source. But there are no mentions of anything like those in the article.
This article shows up first in Google search when looking for “raspberry pi influxdb grafana”. To a complete newbie like me it has now been a second day of glowing red butthurt frustration because nothing was as described.
The last thing that I had to fight was that the iso = time.ctime() directive in the source code (line 52) is not enough, because in reality the time format that InfluxDB will accept is different!
time=datetime.datetime.utcnow().isoformat() + 'Z' worked for me. Without it there is either no data in InfluxDB or it is seconds from Epoch. would have been enough 😉
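To illustrate the difference the commenter describes, here is a small sketch of the two timestamp formats (the exact values printed will depend on when you run it):

```python
import time
import datetime

# time.ctime() gives a locale-style string that InfluxDB does not
# reliably parse as a point timestamp:
print(time.ctime())

# An RFC3339 UTC timestamp is accepted by InfluxDB:
iso = datetime.datetime.utcnow().isoformat() + 'Z'
print(iso)
```

Using the RFC3339 form in the script's json_body "time" field avoids the missing-data/epoch-seconds behaviour described above.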
As to the article: John has provided us, for free, with a great starting point for our own work and saved us hours. There is a reason why the article leads your Google search. Yet it is only one article in one blog. If its deficiencies hurt and you feel it is important, why not do better? Feel free, for example, to start a little RpIG-documentation project on an open platform like GitHub where everybody can contribute via issues and pull requests. You can even start with John’s text, given proper attribution, because John put his text under a Creative Commons license.

Thanks for the support, Sara, and also for understanding that enthusiast blogs like this can’t cover everything! Sorry you had frustrations, Max, but believe me, I had many frustrations getting this working – it’s why I made the post! Hopefully it provides the framework for people to set up their own, but tweaking and user learning is inevitable.

For reference of others, the Grafana InfluxDB link uses the default DB master user ‘admin’/password ‘admin’ – it’s in the screenshot within the blog. The script uses the default InfluxDB admin ‘root’/’root’. I didn’t cover many of the Grafana and InfluxDB concepts, as they are well covered in the Getting Started guides linked.

Hi, thanks for the tutorial. I’m getting the following error while trying to run the python script:

from influxdb import InfluxDBClient
ImportError: No module named influxdb

I’m able to access the web gui for Influx and access grafana (SSH into Pi in both cases), but the script isn’t able to run successfully. I’m using the latest Jessie image on the Pi 3. I’m assuming it’s Python 2.7? Is there anything that I may have missed?

Thanks John. Turns out I was installing it for a different Python version.

I have a few questions regarding this setup, just looking for your feedback here:

How well has this setup worked over time as the database grew in size? Running Grafana on a Pi sounds like it may run out of resources over time. If I have to pull a month’s worth of data into a graph, would it crash?
The Grafana version appears to be different from the one you can install on your desktop, which has many more features. Is this because this is an ARM-specific version that ‘needs’ to be light so it can run on a Pi?
Is there a way to export the data from the db in CSV format from Grafana or Influx?
I could never get this line working: sudo systemctl enable grafana-server. For some reason, it keeps saying grafana-server not found, yet I can always access Grafana on boot without ever executing that line. Using the same version as in the tutorial. Any idea what’s going on there?

Thanks again for all the help.

InfluxDB has options for data retention and duration policies, which can be configured from the admin panel. Have a read of the docs here: It will run out of resources by default, so you should tune it to your application.
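As a sketch, a retention policy that keeps only the last 30 days of data could be created with an InfluxQL statement along these lines (the policy name ‘one_month’ is an assumption; ‘logger’ is the database from the post):

```sql
CREATE RETENTION POLICY "one_month" ON "logger" DURATION 30d REPLICATION 1 DEFAULT
```

Marking it DEFAULT means new writes fall under this policy, so old points are expired automatically instead of growing without bound.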

Yes, I expect it is lighter for the RPi. I’m sure data can be exported as CSV but your best bet for that is Google!

Hi there – thank you so much for putting this together. I was able to dust off an old RPi2 for some data acquisition via 0-5v sensors and this is exactly my objective. The only thing I can’t seem to figure out is the time stamping in Grafana is minus 4 hours compared to the time stamps in the InfluxDB records. I’m getting good NTP sync and when you type the date into the linux shell, it spits out the right time/day. Any suggestions?

Well – found out there are some issues when you have your system timezone not set to UTC. Changed the OS timezone to UTC and changed the dashboard time setting in Grafana from “browser” to “utc” – everything lines up! Thanks again!

I was going to suggest this was the case, given it was exactly 4 hours but you got there before me! Thanks for the comments and glad you’ve got it all going.

Hi John, have you had any issues with running long term (multiple days) logging? Everything is working great but the python process to read data from the automationhat just dies randomly for me after 24hrs or so. I have telegraf running in parallel and it seems to be stably reporting system stats to InfluxDB. Any suggestions on what logs to check?

I ran it for a few days and didn’t have any problems. Haven’t actually used it since so no long term experience. You could try assigning the script as a service so that if it does crash it will restart. From the comment below however, it sounds like there could be a memory leak.

John can you comment on InfluxDB memory usage? I’m experimenting with InfluxDB and when I try to import my historical readings from my SQLite database into InfluxDB, the InfluxDB process uses a massive amount of memory and is eventually killed by the kernel.

Did you have to do any tweaking to the InfluxDB server config? Are you noticing high memory usage by the influxd process?

I’m running on the ODROID-C1 (1GB RAM), and influxd easily consumes 500-700MB before the system runs out of memory and influxd is killed. I’m also using python and client.write_points() to insert the data. I tried adding a smaller batch_size to write_points, but influxd is still too RAM hungry…

It would be nice to find a solution, especially since I’m already waiting a long time for InfluxDB to insert 5000 points (average is 110 seconds per write_points call).

I didn’t have a problem with memory usage, although I only used it for a couple of days at a time and had no need to check. Reading NP’s comment above re crashing, it sounds like you are not the only one.

He said that Telegraf (system metrics) runs happily; maybe try running that for a few days and see if you encounter the same memory usage problems. If you don’t, it could be the Python InfluxDB API or a database/server/client misconfiguration. If you find anything, please let me know.

Thanks for the reply. I should have been more specific: I am using my own program to migrate data from SQLite to InfluxDB, so my issue is not related to NP’s comment about python crashing.

I guess I’ll keep poking. Seems to be an issue on the Influx server side.


I run raspbian stretch on raspi 3 with influxdb (1.3.5) now for 7 days.
I experienced the same: high memory, automatically killed by OS, high CPU.
Played around with config params, but still no solution.

@JBR, have you found a solution already?

many thanks in advance


I have the DHT22 connected to my Raspberry Pi and set everything up, but I can’t get the example script to work with the DHT22 sensor. Does anyone have a Python example script for the DHT22 sensor on the Pi?
