
A few weeks ago, I bought this DNAfx GiT Core by Harley Benton as a first ultra-cheap (89€!) dip into guitar modelling. It’s a cool thingie, but to my surprise the software to edit/control it from a laptop is only available on Windows/macOS…
This is the story of how I decided to write my own open source editor for it, by reverse engineering the protocol, and how I used that to open the door to more interesting opportunities, like using MIDI to automate tasks on the device. Notice that, while I’ll focus on the Core, this should actually work with other DNAfx GiT devices too (I just haven’t tested them).
The motivation
I was intrigued by the DNAfx GiT Core after watching a video review by Ola Englund. It looked like a really cool device in a tiny package, and most importantly very cheap (89€ on the Thomann website, at the time of writing). I had briefly considered getting a Kemper, sooner or later, to replace my old and beloved Line6 POD X3 Live for all my guitar needs, but those are just too expensive… As such, the DNAfx GiT Core was a really interesting way to start playing a bit with guitar modelling from a different perspective.
Out of the box, and excluding the “regular” audio inputs/outputs, the device can connect to a laptop in two ways:
- a USB interface that allows software to remotely control/edit the device functionality (e.g., editing/adding/removing presets);
- a USB-C interface (called OTG, On-The-Go), to expose the device as a soundcard the laptop can get the processed audio from in real-time.
I use Linux as my daily driver, and when I connected to the device using the OTG interface, that worked out of the box. Pipewire recognized it as a capture device, and I could get live audio from it. Unfortunately that audio came with a bit of delay (which apparently, as per an exchange with Thomann, is to be expected), but that wasn’t a big deal, as I could get audio from the device using the generic stereo output and an external soundcard instead.
I wasn’t as lucky with the USB interface. Linux did recognize it as a HID device, but there was nothing I could do with it. Harley Benton does provide an official software editor for this device, but it’s only available for Windows and macOS. I tried using Wine to install and use it, but all I got was a “No device detected” error: USB support is known to be a bit flaky in Wine, so that was to be expected. At that point, I tried using a VM instead, as that had worked nicely with my X3 Live. Unfortunately, despite properly configuring USB redirection, I got the same error there too, no matter which VM (qemu or VirtualBox) or Windows version (7 or 11) I tried. That meant I had no way, on Linux, to edit the device as it was meant to be edited, using the editor you can see below.

Now, that doesn’t mean the device was totally useless. It does have hardware knobs and controls, which allow you to switch presets, and even edit them on the device itself. That said, while it’s not hard, it’s a tedious process. Besides, with 200 presets and just a single up/down control to switch them, it can take forever just to select the one you like.
This was motivation enough for me to start looking into alternatives, possibly open source. Starting with an important question: did anybody start tackling this challenge already?
A bit of research
Browsing GitHub, the first related effort I stumbled upon was a repo called hbdnafx-git by a French developer named Bruno Gillet. The project seemed to be at a very early stage, but it had a ton of documentation detailing Bruno’s dive into the USB protocol and the preset format. Just reading his material was very exciting, as it meant this was indeed doable!
In terms of code, though, there wasn’t much. The Python code contained a couple of examples that showed basic startup messages, including what looked like a way to “greet” the device (initialization, maybe?) and a way to retrieve the list of presets from the device. Some time later, when I got access to a Windows laptop to do some USB captures with Wireshark, I found out those were the exact same messages the official editor sent as soon as you started it. As a matter of fact, I could see the same messages when capturing traffic from my VMs as well (why that didn’t work, I don’t know). But as far as more functionality was concerned, there was nothing else that could be of use. I tried enquiring about the project status, but got no response: from a reply to an earlier issue, though, it was clear Bruno had stopped working on this a while back, and it wasn’t clear when he might be able to get back to it.
Shortly thereafter, I found a different repo, also in Python, called DNAfx_GiT_CLI by a fellow Italian developer called Niccolò Bordoni. This repo also aimed at talking to the device, but with a different approach, and with the aim of providing a CLI to edit it programmatically. That was another really cool effort worth having a look at! Talking to Niccolò on GitHub, he told me about his interest in the looper functionality, and how he got the “change preset” functionality working by analyzing Wireshark captures made with the official editor on Windows. Looking at the examples, I soon managed to find a pattern that made sense, and that got me really intrigued: maybe I could try and do something myself as well, starting from Bruno’s and Niccolò’s efforts?
Time to roll up my sleeves!
Studying USB (and libusb) for a new editor
Both repos were in Python, which is a language I’m not familiar with. I’m proficient with C, though, which I use every day for work, so I decided to have a look at how I could implement a basic application that could use libusb to talk to the device. I won’t bore you with the details, but long story short, after tinkering with the synchronous and asynchronous patterns in libusb, I did manage to send messages to the device and get responses back.
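Just to give an idea of what that looks like, below is a minimal sketch of the synchronous pattern. Note that the vendor/product IDs and endpoint addresses are placeholders, not the device’s actual values (check lsusb for those), and error handling is mostly omitted:

#include <stdio.h>
#include <libusb-1.0/libusb.h>

/* Placeholders: use the actual IDs and endpoints of your device */
#define DNAFX_VID 0x0000
#define DNAFX_PID 0x0000
#define EP_OUT    0x01
#define EP_IN     0x81

int main(void) {
    libusb_context *ctx = NULL;
    libusb_init(&ctx);
    libusb_device_handle *dev =
        libusb_open_device_with_vid_pid(ctx, DNAFX_VID, DNAFX_PID);
    if (!dev) {
        fprintf(stderr, "No device found\n");
        return 1;
    }
    /* Linux sees the device as HID, so the kernel driver owns it by default */
    libusb_detach_kernel_driver(dev, 0);
    libusb_claim_interface(dev, 0);
    /* Send a 64-byte message, then wait for the device's response */
    unsigned char buf[64] = { 0 };  /* the message payload goes here */
    int transferred = 0;
    libusb_interrupt_transfer(dev, EP_OUT, buf, sizeof(buf), &transferred, 1000);
    libusb_interrupt_transfer(dev, EP_IN, buf, sizeof(buf), &transferred, 1000);
    libusb_release_interface(dev, 0);
    libusb_close(dev);
    libusb_exit(ctx);
    return 0;
}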
The first “important” message I implemented was indeed the “change preset” functionality, as that solved an important issue: moving between presets quickly, without pressing the hardware buttons dozens of times! Step after step, I started implementing some of the additional messages as well. I mentioned how, at a certain point, I had managed to get some USB captures of my own using the official editor on a Windows machine, and that gave me some additional material for investigation and reverse engineering. That’s how I figured out how the official editor sends a couple of initialization messages at startup (followed by a response from the device with info on the firmware), and what messages are sent to retrieve the full list of presets from the device. The one below, for instance, is the message I’m sending to change presets, specifically to switch to preset 59 (hex code 3B, a preset called “SLOW2”).

The latter was particularly interesting, as it allowed me to also start exploring something else: the binary format of the presets themselves. The device comes with a ton of settings you can configure, with nine different effects, each with its own degree of customization. Figuring out how that maps to what’s sent on the wire was particularly important, of course, since it might be the door to unlocking editing capabilities (e.g., uploading a new preset from my laptop, or editing an existing one). Bruno’s repo came with some excellent information in that regard, especially with respect to the mapping of numeric identifiers to named properties. The details of the binary format ended up being a bit different, though, which forced me to do some more digging to figure out the right one.
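Conceptually, a preset boils down to something like the hypothetical structure below, which mirrors the properties you’ll see in the PHB example further down. To be clear, this is just an illustration of the kind of data involved, not the actual 184-byte wire layout (field order, sizes and padding on the wire are exactly what had to be reverse engineered):

#include <stdint.h>

/* A hypothetical in-memory view of a preset: NOT the wire format */
typedef struct dnafx_effect {
    uint8_t enabled;      /* the SWITCH property */
    uint8_t type;         /* which variant of the effect (amp model, etc.) */
    uint16_t params[6];   /* effect values (some, like delay TIME, exceed a byte) */
} dnafx_effect;

typedef struct dnafx_preset {
    uint8_t slot;             /* 1-200 */
    char name[16];            /* hypothetical maximum length */
    dnafx_effect effects[9];  /* AMP, CAB, DELAY, DS/OD, EQ, FX/COMP,
                               * MOD, NS GATE, REVERB */
} dnafx_preset;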
Once I figured out the binary format, I started analysing the “official” preset format as well: PHB files. These are basically JSON with a custom, fixed structure, using specific names for the different properties. This is the format the official editor accepts when you want to share/upload a preset, and how existing presets are saved when you want to store them on your computer. An example of a PHB file is provided below, just for reference:
{
    "Exp": {
        "FUN_SWITCH": 0,
        "MODULE_CTRL": 0,
        "PARA_CTRL": 1,
        "VOL_MAX": 100,
        "VOL_MIN": 0,
        "VOL_SWITCH": 0
    },
    "effectModule": {
        "AMP": {
            "Data": {
                "BASS": 76,
                "GAIN": 25,
                "MID": 75,
                "MST": 100,
                "PRES": 54,
                "TREBLE": 31
            },
            "SWITCH": 1,
            "TYPE": 8
        },
        "CAB": {
            "Data": {
                "CENTER": 10,
                "DISTANCE": 50,
                "MIC": 6,
                "TUBE": 1
            },
            "SWITCH": 1,
            "TYPE": 5
        },
        "DELAY": {
            "Data": {
                "F.BACK": 3,
                "LEVEL": 20,
                "SUB-D": 0,
                "TIME": 306
            },
            "SWITCH": 1,
            "TYPE": 0
        },
        "DS/OD": {
            "Data": {
                "GAIN": 30,
                "TONE": 50,
                "VOLUME": 50
            },
            "SWITCH": 1,
            "TYPE": 18
        },
        "EQ": {
            "Data": {
                "1.6KHz": 21,
                "100Hz": 16,
                "250Hz": 16,
                "4KHz": 18,
                "630Hz": 18
            },
            "SWITCH": 1,
            "TYPE": 0
        },
        "FX/COMP": {
            "Data": {
                "LEVEL": 50,
                "PEAK": 50,
                "POSITION": 50,
                "Q": 50
            },
            "SWITCH": 0,
            "TYPE": 0
        },
        "MOD": {
            "Data": {
                "DEPTH": 50,
                "LEVEL": 50,
                "RATE": 50
            },
            "SWITCH": 0,
            "TYPE": 0
        },
        "NS GATE": {
            "Data": {
                "THRES": 20
            },
            "SWITCH": 0,
            "TYPE": 0
        },
        "REVERB": {
            "Data": {
                "DECAY": 23,
                "LEVEL": 25,
                "P.DELAY": 44,
                "TONE": 25
            },
            "SWITCH": 1,
            "TYPE": 1
        }
    },
    "fileInfo": {
        "app": "HB100 Edit",
        "app_version": "V1.0.0",
        "device": "HB100",
        "device_version": "V1.0.0",
        "preset_name": "GARY MOORE ",
        "schema": "HB100 Preset"
    }
}
With the help of Bruno’s document and the device screen, I eventually managed to figure out all the mappings, so I was able to translate a preset from binary format to PHB (e.g., to keep an archive of all stock presets), as well as translate a PHB file to binary format (which is needed to upload a preset downloaded from the internet, e.g., here, to the device).
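Since the structure is fixed, navigating a PHB file programmatically is trivial with any JSON library. Here’s a quick sketch that pulls a couple of values out of a file like the one above; jansson is an assumption of mine here, not necessarily what the editor actually uses:

#include <stdio.h>
#include <jansson.h>

int main(int argc, char *argv[]) {
    if (argc < 2)
        return 1;
    json_error_t error;
    json_t *root = json_load_file(argv[1], 0, &error);
    if (root == NULL) {
        fprintf(stderr, "Parse error: %s\n", error.text);
        return 1;
    }
    /* The preset name lives in the fileInfo object */
    json_t *info = json_object_get(root, "fileInfo");
    printf("Preset name: %s\n",
        json_string_value(json_object_get(info, "preset_name")));
    /* Effect parameters live in effectModule -> <effect> -> Data */
    json_t *amp = json_object_get(json_object_get(root, "effectModule"), "AMP");
    printf("AMP gain: %lld\n", (long long)json_integer_value(
        json_object_get(json_object_get(amp, "Data"), "GAIN")));
    json_decref(root);
    return 0;
}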
Uploading a preset was indeed a fun challenge. All presets are 184 bytes (even though not all of those bytes are used). Inspecting some of the captures I had taken, I noticed the editor would always upload a preset in 4 chunks of 64 bytes each (there’s a sketch of this framing in code right after the list):
- the first would just be something of an “I’m about to upload a preset” kind of command, so always the same;
- the second would contain a fixed prefix followed by the first part of the preset itself, including the slot to put it into and its name;
- the third would contain more of the preset, prefixed by a 0x3f byte for framing;
- the fourth and last one would contain the last data from the preset, prefixed by a 0x28 byte to indicate the end of framing.
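In C, that framing logic looks roughly like the sketch below. The command bytes of the first chunk and the fixed prefix of the second are placeholders here (the real values come from the captures), and send_chunk() stands in for a 64-byte interrupt transfer to the device:

#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE   64
#define PRESET_SIZE  184
#define PREFIX_SIZE  4    /* hypothetical length of the fixed prefix */

/* Wraps a 64-byte libusb interrupt transfer to the OUT endpoint */
extern int send_chunk(const uint8_t chunk[CHUNK_SIZE]);

int upload_preset(const uint8_t preset[PRESET_SIZE]) {
    uint8_t chunk[CHUNK_SIZE];
    size_t offset = 0, room = 0;
    /* Chunk 1: the fixed "I'm about to upload a preset" command */
    uint8_t cmd[CHUNK_SIZE] = { 0x00 /* placeholder command bytes */ };
    if (send_chunk(cmd) < 0)
        return -1;
    /* Chunk 2: fixed prefix, then the start of the preset (slot, name, ...) */
    memset(chunk, 0, CHUNK_SIZE);
    /* chunk[0..PREFIX_SIZE-1] = the fixed prefix bytes (placeholders here) */
    room = CHUNK_SIZE - PREFIX_SIZE;
    memcpy(chunk + PREFIX_SIZE, preset, room);
    offset += room;
    if (send_chunk(chunk) < 0)
        return -1;
    /* Chunk 3: 0x3f framing byte, then more preset data */
    memset(chunk, 0, CHUNK_SIZE);
    chunk[0] = 0x3f;
    room = CHUNK_SIZE - 1;
    memcpy(chunk + 1, preset + offset, room);
    offset += room;
    if (send_chunk(chunk) < 0)
        return -1;
    /* Chunk 4: 0x28 framing byte, then whatever is left of the preset */
    memset(chunk, 0, CHUNK_SIZE);
    chunk[0] = 0x28;
    memcpy(chunk + 1, preset + offset, PRESET_SIZE - offset);
    return send_chunk(chunk);
}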
You can imagine my joy when, after a few attempts, I finally managed to successfully upload a preset to the right slot. I finally had the basics in place for some actual editing from Linux!
Now that I had something working, I could finally release the project as open source: you can find it, with instructions on how to compile and use it, on GitHub.
Let’s make it interactive, shall we?
All I had done so far worked great, but it was not interactive at all: I exposed all the functionality as command line arguments, meaning that the tool would do what you asked it to, and then either go idle or simply shut down. While that has its uses (it’s very easy to integrate as part of shell scripts to trigger specific actions, for instance), it’s far from ideal when you want something that can interact with the device programmatically and asynchronously. I knew I wanted at least two different ways of interacting with the device: some sort of CLI, and a few network interfaces too, if possible. I decided to start with the CLI: since I sit in front of my laptop most of the time, it’s what made the most sense as a starting point.
The plan was to have something like the usual CLIs we’re used to: something that can parse commands, possibly with history, autocompletion, up/down arrows to recall previous commands, etc. Rather than coding it from scratch myself, I had a look at what options were available in the open source space, and after a few tries I stumbled upon a project called EmbeddedCLI, a C project that seemed very easy to integrate in existing projects with no additional dependencies. The demo looked great too, since it seemed to provide exactly what I was looking for!
When I integrated it, I only had partial success, though… parsing commands did indeed work, but the up/down functionality definitely didn’t (control codes would appear on the screen instead), and editing a command didn’t work either. I’m pretty sure that’s my fault, especially in terms of how the generic output of the tool is printed to the screen (which probably interferes with whatever mechanism makes terminals support those features), but I haven’t investigated that yet. I didn’t care much, though: what mattered was having a way to enter commands dynamically on the console, and that worked beautifully.

At the time of writing, the supported commands are the following:
- help: Show this message
- init: Send the initialization messages
- get-presets: Retrieve the presets from the device
- get-extras: Retrieve the list of extras from the device
- change-preset <number>: Change the active preset on the device
- rename-preset <slot> "<name>": Rename an existing preset on the device
- upload-preset "<name>" <slot>: Upload a named preset to the specified slot on the device
- import-preset <binary|phb> "filename": Import the specified binary or PHB preset
- parse-preset <number>|"name": Print the content of the specified preset
- export-preset <number>|"name" <binary|phb> ["filename"]: Export the specified preset as a binary or PHB file
- list-presets: Print the list of known presets
- quit: Close the editor
The basics were all there, and they mostly worked, so the next step was figuring out how to allow remote endpoints to use the tool to interact with the device as well, rather than just using the CLI or command line arguments.
Adding HTTP and WebSocket support
As part of my daily job, I work predominantly with network protocols. As a matter of fact, I use technologies like WebRTC and QUIC daily, as well as generic protocols like HTTP and WebSocket. The latter two made the most sense as a way to interact with the device, as they’re standard and widespread protocols that pretty much everyone understands and uses. They would definitely pave the way for browsers and other endpoints to interact with the editor, opening the door to a ton of interesting opportunities.
A library I’ve used often in many projects is libwebsockets. Despite what the name may suggest, it doesn’t only provide WebSocket support, but HTTP as well, which made it a perfect candidate for exposing both as an interface in the tool. Having already worked with the library in the past, integrating it into the editor was a relatively straightforward process. Most of the effort was refactoring the tool a bit to allow tasks to return data as well, rather than just print stuff on the screen (which was enough for the CLI and command line arguments before). I also had to introduce some light multithreading, since integrating the libwebsockets loop into the loop we already had would have required an amount of effort that, if I’m being honest, I was just too lazy to tackle.
Long story short, I eventually implemented everything, and added the required configuration options to expose both HTTP and WebSocket as a way to control the editor. At the time of writing, you can only specify the port to bind to: when you do, both HTTP and WebSocket are automatically enabled. In the future, it may make sense to only expose one of the two, and possibly add some additional options as well: HTTPS and secure WebSockets are one idea (but is that really needed, when you can have a proxy like nginx do that for you?), authentication may be another, or binding to specific interfaces rather than all of them. None of those are a big priority, though: what mattered was having network access available, so that’s enough for now.
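For the curious, the integration boils down to a skeleton like the one below. This is just a rough sketch of the pattern, not the editor’s actual code: the protocol name (“dnafx-editor”) and the payload handling are placeholders.

#include <string.h>
#include <libwebsockets.h>

static int ws_callback(struct lws *wsi, enum lws_callback_reasons reason,
        void *user, void *in, size_t len) {
    if (reason == LWS_CALLBACK_RECEIVE) {
        /* 'in' contains a JSON command, e.g.,
         * {"request": "change-preset", "arguments": ["59"]} */
        lwsl_user("Got command: %.*s\n", (int)len, (const char *)in);
        /* Parse the JSON, queue the task for the USB thread, and send
         * the result back from LWS_CALLBACK_SERVER_WRITEABLE */
    }
    return 0;
}

static const struct lws_protocols protocols[] = {
    /* HTTP requests (the curl example below) would be handled here too,
     * via LWS_CALLBACK_HTTP, instead of the dummy callback */
    { "http", lws_callback_http_dummy, 0, 0 },
    { "dnafx-editor", ws_callback, 0, 1024 },
    { NULL, NULL, 0, 0 }  /* terminator */
};

int main(void) {
    struct lws_context_creation_info info;
    memset(&info, 0, sizeof(info));
    info.port = 8000;  /* a single port serves both HTTP and WebSocket */
    info.protocols = protocols;
    struct lws_context *context = lws_create_context(&info);
    if (context == NULL)
        return 1;
    while (lws_service(context, 0) >= 0);  /* the lws event loop */
    lws_context_destroy(context);
    return 0;
}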
Having HTTP access is particularly interesting, as it allows you, for instance, to use curl one-liners to send commands to the editor without the need to establish a session. Just as with command line arguments, this is an excellent option to use as part of, e.g., bash scripts (especially those that are not colocated with the editor), or external applications. This, for instance, is a request we can send to tell the device to switch to preset 59:
curl -v -d '{"request": "change-preset", "arguments": [ "59" ]}' http://127.0.0.1:8000/
If you imagine this as part of some web-based UI, it already makes the process of quickly changing presets much, much easier. There’s of course much more we could do (the functionality exposed via HTTP/WS is the same as the one available via the CLI), but let’s see how we can already take advantage of simple preset switching in a more practical scenario.
Next step: MIDI!
Most devices in the music world are usually controllable, one way or another, using MIDI. The reason is simple: MIDI is a standard protocol, used pretty much everywhere for controlling music equipment, so it’s common to see it act as a control protocol too, rather than just a way of “serializing” music notes to play.
A typical example is the Behringer FCB1010, a MIDI foot controller. It has a dozen foot pedals, plus a couple of expression pedals, all of which can be assigned a MIDI control code to send when interacted with. This makes it easy, for instance, to map foot pedal #6 to send a specific MIDI command to an external device: if that external device is configured to react to that in a specific way (e.g., change the preset in use, or enable/disable a specific effect), then the end result is you pressing a button with your foot and your guitar sound changing accordingly. Very simple, and yet very effective!

That’s indeed what I got this pedalboard for, ages ago. At the time, I had a Pod Pro as my guitar multieffect, and I configured the pedalboard to associate buttons to some of my favourite presets on the box. At the time I sometimes played in a garage with friends too, so this was really helpful.
Now that the pedalboard is basically gathering dust (since my X3 Live has an integrated pedalboard for switching), I thought: why not try and use it for the DNAfx GiT Core instead? As mentioned, the DNAfx doesn’t have any MIDI support at all, but now we have an editor to control it, and an editor with a network API at that. This means all we need to do is create a thin layer that can bridge the two: receive MIDI commands on one side, and interact with the device (via our editor) using either HTTP or WebSocket on the other.
To keep things simple, I decided to write a basic Node.js application to deal with that. In fact, while it may look like adding native MIDI support to the editor would be the “best” option, there are a few drawbacks to that: first of all, it means adding custom C code to the tool, which is never straightforward; and, most importantly, who’s to say what people may want to do with MIDI? Some may want a single MIDI command to trigger multiple actions, on the same device or on multiple devices at the same time. This means that having something else act as a “middle-man” can provide more flexibility. This isn’t to say I’ll never add MIDI support to the editor, of course: just that, for now, it made sense to decouple the two, and leverage the existing interfaces for some experiments.

I used the easymidi library on Node.js, as it makes receiving MIDI really easy. This is a snippet from the code I have in the demo on the repo:
const easymidi = require('easymidi');
const midi = new easymidi.Input('DNAfx GiT MIDI Controller', true);
midi.on('cc', function(cc) {
    console.log('Got MIDI message:', cc);
    // Do something with it
});
In the demo I have in the repo, I made the behaviour customizable, by basically allowing you to specify in a configuration file (settings.js) which CC messages to intercept and, depending on their value, what action to perform. At the moment, only a single action is supported (“send an HTTP request with this JSON payload”), which is enough as a proof of concept. The sample configuration file looks like this:
module.exports = {
    // MIDI settings
    midi: {
        cc15: {
            0: {
                http: { request: 'change-preset', arguments: ['200'] },
            },
            1: {
                http: { request: 'change-preset', arguments: ['1'] },
            },
            2: {
                http: { request: 'change-preset', arguments: ['35'] },
            },
            3: {
                http: { request: 'change-preset', arguments: ['59'] },
            }
        }
    },
    // DNAfx editor backend
    http: {
        host: '127.0.0.1',
        port: 8000
    }
};
which means we’re intercepting CC 15 and then, depending on the value, sending different change-preset messages via HTTP to http://127.0.0.1:8000 (which is where our editor is supposed to be waiting). A simple way to test this is by using the sendmidi command line application, e.g.:
sendmidi dev "DNAfx GiT MIDI Controller" cc 15 3
which in our MIDI application will trigger the following behaviour:
Got MIDI message: { channel: 0, controller: 15, value: 3, _type: 'cc' }
-- Associated action: { http: { request: 'change-preset', arguments: [ '59' ] } }
and an HTTP request to our editor with that payload:
[HTTP] {"request":"change-preset","arguments":["59"]}
At this point, I tried bringing in the FCB1010: the idea was to see if I could map some of its foot pedals to the actions we configured here, after setting up a connection between the pedalboard and my MIDI application. I had underestimated a tiny detail, though… I hadn’t used the pedalboard in ages, and I had no idea how to configure it anymore! The editors I found online seem to only partly work: I’m not sure if, back in the day, I messed with its firmware by installing something more customized (I’ve seen UNO is an option, for instance).
Long story short, I decided to first of all just connect the pedalboard to my MIDI application, which, if you use a tool like qjackctl (or any other Pipewire equivalent), can be done like this:

Then I watched what CC messages the pedalboard sent when I pressed random buttons (i.e., whatever I had configured the last time I used it), and adapted my Node.js application’s configuration file accordingly. I noticed that, when I pressed buttons 1 to 4, I’d get:
- CC 43 set to 127 or 0;
- CC 22 set to 127 or 0;
- CC 25 set to 127 or 0;
- CC 26 set to 127 or 0.
As such, I simply added more CC handlers to my configuration file, making it look like this:
[..]
    cc43: {
        127: {
            http: { request: 'change-preset', arguments: ['200'] },
        }
    },
    cc22: {
        127: {
            http: { request: 'change-preset', arguments: ['1'] },
        }
    },
    cc25: {
        127: {
            http: { request: 'change-preset', arguments: ['35'] },
        }
    },
    cc26: {
        127: {
            http: { request: 'change-preset', arguments: ['59'] },
        }
    }
[..]
In a nutshell, any time one of the above-mentioned footswitches is pressed, the related CC message is sent, and the MIDI application reacts by contacting my DNAfx editor accordingly.
Does that work? Let’s see!
Eureka! Suboptimal, because of the poor configuration of the pedalboard itself (since I just reused what was there 15 years ago), but as a proof of concept it demonstrates that this kind of interaction is indeed doable.
So much potential! Especially if you consider that in this example we’re just changing presets: should I manage, in the future, to expose APIs to toggle specific effects too, this communication paradigm could also be used to add “stompbox” functionality through the pedalboard: e.g., you use some buttons to change presets, and other buttons to enable/disable the delay and/or something else.
What’s next, then?
Even though this looks like a lot, there’s a ton still missing, especially in terms of features. I definitely haven’t implemented all the supported messages (how can you tweak an individual setting without uploading a preset in its entirety, for instance?), and the management of events triggered by the device is also missing (which I suspect is the cause of some “stalled” messages I get from time to time).
Apart from considerations related to the USB part of the communication, I think there’s a lot more that can be done to make this effort even more usable and helpful. I showed, for instance, how I leveraged the existing interfaces to implicitly “expose” the device to the MIDI world, allowing me to use a MIDI pedalboard to trigger actions on a device that doesn’t know a thing about MIDI, but there’s so much more we may want to do. There are endless possibilities opened up by the flexible nature of the APIs and the standard protocols involved.
I know Niccolò has been working on an Android application leveraging his stack to control the device from his phone, for instance, and that’s definitely something that might be cool to do with my editor as well: the HTTP/WebSocket support definitely makes this possible, and in the future it may even make sense to investigate whether Bluetooth could help in that regard. Just imagine a tiny Raspberry Pi coupled with your device, acting as an “avatar” you can talk to, which will enforce your remote commands and keep you updated. Even without going to the mobile world, a web interface for the editor would be quite helpful as well. A native desktop client, maybe based on GTK, could be nice, but it would be harder to implement. Writing a web app that provides the same functionality and relies on HTTP/WS to talk to the device would be much simpler, and could run in a browser (on a desktop, or even on a tablet/phone).
All that, of course, is just wishful thinking for the moment. These are things I’d like to work on, sooner or later, but this is a hobby and something I’m doing for fun. I doubt I’ll end up implementing much I don’t have an actual need for myself, so hopefully more people will get interested in this and start contributing ideas and code, sooner or later.
Feedback welcome!
So far this has been a ton of fun but, apart from Niccolò, I haven’t really met anyone else interested in this device with the same requirements/constraints I have that I could exchange ideas with. It would be great to find more people interested in this and, why not, hopefully build something more from this foundation. Please feel free to reach out on Mastodon or Bluesky with any questions or comments!