A friend of mine recently worked on a search for a system that used a triplestore.
The problem was a simple one: Find all the stuff with a given string in it, even if the string isn't a whole word.
They ended up with about 5 million
(id, text) pairs, used Elasticsearch, and the queries usually take less than 1 second.
I don't know the details of what else is going on (ranking, sorting, etc.) so please don't take this as representative of Elasticsearch's performance. ;)
But I asked myself:
How fast would that be if we brute-force it with grep?
What if we write all the (id, text) pairs out to a file, one pair per line?
Then we could use the grep command line program to get all lines that contain a specific substring.
And from those matching lines we can extract the IDs. A few lines of code should do the trick, right?
For how much data would this be a good sweet spot of complexity vs. performance (read: The right tool for the job)?
So I wrote a microbenchmark. Because what else do you do over Christmas? :)
In case you couldn't tell already: This is going to be something only programmers can obsess over. You have been warned. ;)
Just in case you only want the gist of it and don't care about the details:
- Tested a file with 10 million lines (1.4 GiByte), each line about 150 bytes. Also tested a few other sizes but the performance didn't change much.
- The tests were run on a notebook with an Intel i7-8550U @1.80GHz with 32 GByte RAM running Linux 5.3.0
- The entire file fits into memory (is kept in the kernel page cache) so we don't have to wait for the SSD to read the data.
Now on to the data itself:
grep can handle 2.3 GiByte/s. If you have just 1 million searchable records, each about 150 bytes long that would be about 145 MiByte of data.
grep will chew through that in about 63ms. Pretty fast, isn't it?
awk and mawk are slower, but still pretty fast.
I wanted to know how fast a naive implementation of a search would be in different languages.
Please don't take this as a performance comparison.
There are many different reasons to choose a language for a project, and performance of the language itself usually isn't a terribly important one.
I usually work with different languages at the same time (use the right tool for the job) and I wanted to know what performance I can expect in a given environment. :)
- The PHP, Python and Ruby programs that search each line individually are somewhat slower. But still in the hundreds of MiBytes/s on the test system.
- The last three programs load the entire file into one string (e.g. with mmap) and search it for matches. This eliminates the line iteration overhead.
A little more complicated but quite a bit faster.
If a search can take 1 second you can chew through about 1 GiByte of data in that time.
I also wrote a few C programs that do the same as above.
And since multi-threading in C is easier than in many interpreted languages I threw some threads at the problem.
Note the different scale.
- C can chew through a lot of data per second. :)
- But honestly I don't think I'll have a project anytime soon where I would sacrifice several GiByte of RAM just to keep a search index (the text file) in memory.
And as soon as the search index isn't in memory the search will only be as fast as the disk can provide the data (1.64 GiByte/s on the test system's SSD).
Note that the Linux kernel only keeps the search index in memory (in the page cache) if there is nothing more useful it could do with the RAM.
If nobody uses the search the kernel won't waste memory for it.
So much for the "short" version. If you're still interested in the details feel free to read on. :)
The test setup
All the programs, scripts and commands used are available in a GitHub repository.
I also put the raw benchmark results in there
(this includes the versions of everything used).
The benchmarks were run with the command
./bench.sh > debug.log 2> results.txt.
To get a reproducible data set I wrote a small Ruby script to generate
id: text lines.
The script uses a fixed seed so it always generates the same lines. A small sample:
Pretty much base64 encoded random garbage.
Each line is on average about 150 bytes long, some a lot longer, some a lot shorter.
The 150 byte length doesn't have any deeper meaning. I just picked something.
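The actual generator is a small Ruby script in the repository. Just to illustrate the idea, a rough sketch of such a generator (here in C, hypothetical and not the script I used) could look like this:

```c
#include <stdio.h>
#include <stdlib.h>

// Sketch of a reproducible test data generator: prints "id: text" lines where
// the text is base64-ish random garbage of varying length. A fixed seed means
// the same C library produces the same lines on every run.
int main() {
    const char charset[] =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    srand(42);  // fixed seed for reproducible output
    for (long id = 1; id <= 10000000; id++) {
        int length = 50 + rand() % 200;  // some lines longer, some shorter
        printf("%ld: ", id);
        for (int i = 0; i < length; i++)
            putchar(charset[rand() % 64]);
        putchar('\n');
    }
    return 0;
}
```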
You might have way fewer documents with a lot more text in them or way more things with just short text.
That's why I give the performance in MiByte/s. It doesn't matter that way.
| File | Lines | Size |
| --- | --- | --- |
| lines-100k.txt | 100 thousand | 14.4 MiByte |
| lines-1m.txt | 1 million | 144.9 MiByte |
| lines-10m.txt | 10 million | 1.4 GiByte |
| lines-50m.txt | 50 million | 7.2 GiByte |
| lines-100m.txt | 100 million | 14.3 GiByte |
I generated a few of those files with 100 thousand, 1, 10, 50 and 100 million lines. Just to see if the programs would behave differently with different file sizes.
The last step was to pick some substring in those lines (e.g. "VrKDJbjoQ9xnzg") and search for it with different tools.
Each program was executed multiple times and the results were averaged. From 10 times for the small files like
lines-1m.txt down to 2 times for the big files like lines-100m.txt.
You can look at bench.sh for the details.
Like grep, each tool is supposed to search a text file for lines that contain the given substring.
No matter where it is or if it's an entire word or just part of a word.
I just tested the tools I had already installed and what struck my fancy.
So don't be surprised if it doesn't include your favorite tool.
I also wanted to know how fast naive implementations of such a search would be in different languages.
That's how the PHP, Ruby, Python and C programs found their way into the benchmark.
Again, not to compare performance but rather to know what to expect in a given environment.
Measuring execution time
/usr/bin/time was used to measure how long it took for a program to go through an entire file (logged the wall time and some other stuff).
This isn't the
time builtin of shells like
zsh but an extra program.
A small caveat here: The time program only reports times with 10ms precision.
Therefore I didn't do runs with 100k lines. Some programs were simply too fast and I got division by zero errors.
And I was too lazy to look for a tool to properly read the timing information from the Linux kernel or write a script for it.
All that stuff is available under /proc (smaps and more), see the
proc man page.
The test systems
Two systems had the honor of not running fast enough when I had the idea for the benchmark.
So they had to search text files for substrings for the next two days. :)
One PC (can't blame it, it can't run away):
Linux 4.15.0-91-generic #92-Ubuntu SMP x86_64
CPU: Intel(R) Core(TM) i5-3570K CPU @ 3.40GHz
RAM: 15.6 GiByte
Memory bandwidth: 15.96 GiByte/s (reported by `mbw 2048`)
Disk: 4TB WD Red "WDC WD40EFRX-68WT0N0 (82.00A82)"
Read speed: 103 MiByte/s (measured with `wc -l`)
And one notebook (no excuses here):
Linux 5.3.0-46-generic #38~18.04.1-Ubuntu SMP x86_64
CPU: Intel(R) Core(TM) i7-8550U CPU @ 1.80GHz
RAM: 31.1 GiByte
Memory bandwidth: 13.8 GiByte/s (reported by `mbw 2048`)
SSD: 1 TB Samsung "MZVLB1T0HALR-000L7"
Read speed: 1.64 GiByte/s (measured with `wc -l`)
The rest of the article only shows the results of the
lines-10m.txt run (10 million lines, 1.4 GiByte) on the notebook.
It's pretty representative of the other runs. No need to show more data that doesn't say anything interesting.
Funnily enough both CPUs performed pretty much the same. Well, the task at hand isn't exactly black magic, even for older CPUs.
Anyway, the full data of all runs is in the repository.
Assumption: All the data fits into memory
Or rather in the page cache. In case this is new for you:
When you read data from a file the operating system does something nifty.
If it has unused memory left it keeps the data in there.
The next time a program wants to read the same chunk ("page") of the same file the operating system takes it right out of the page cache.
That way it doesn't have to wait for the disk again to get the data.
Well, free memory doesn't help anyone, so might as well use it to cache some data that might help someone in the future. :)
In the benchmarks below I read the entire file once before running the actual benchmark programs.
From then on it's in the page cache and all consecutive programs got their data straight from there.
Thanks to that the benchmarks are not bound by the read performance of the disk.
As long as the file fits into the page cache that is.
If the file is for example larger than the memory your program is always limited by how fast your disk
can provide data (e.g. 1.64 GiByte/s on the notebook's SSD).
Results and findings
Ok, that's it for the "preliminaries". Now on to the real meat: The results and what surprised me.
grep is faster with longer search strings
While testing out
grep I noticed that longer search strings seem to perform better:
Here are the individual commands:
short, 2310 MiByte/s:
$ grep VrKDJbjoQ9xnzg lines-10m.txt
medium, 4177 MiByte/s:
$ grep V6TGqGjjnmhlHRcYEq1IJCgzUNSx09bCkwJnEK lines-10m.txt
long, 5207 MiByte/s:
$ grep FMzaoLmVeEoJ3PAPERDFSO2RFEo5/mO17YTQrXz4jr0Ud9w0854q6/rcRu11AocX3vzl4q7O0f6c lines-10m.txt
At first I was a bit startled by that.
Especially given that
grep doesn't only search for simple substrings but can also search for regular expressions.
But then I realized that the longer the match the smaller the number of characters in a line where the match could start.
In other words: If we search for a string that is 20 chars long we can skip the last 19 chars in each line.
They can't possibly contain a match. At least that's what I think is happening here.
This might also be a general optimization for regular expressions in
grep. I haven't looked into it.
Update 2020-12-30: It's probably the Boyer–Moore string-search algorithm.
Forgot about that one. :D
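For illustration, here is a rough sketch of the simpler Boyer–Moore–Horspool variant (not grep's actual implementation). The skip table lets the search jump ahead by up to the length of the search string on a mismatch, which is exactly why longer search strings tend to go faster:

```c
#include <string.h>

// Boyer-Moore-Horspool substring search, a simplified cousin of what grep
// probably uses. Returns a pointer to the first match or NULL.
static const char *bmh_search(const char *haystack, size_t haystack_len,
                              const char *needle, size_t needle_len) {
    if (needle_len == 0 || haystack_len < needle_len)
        return NULL;

    // For each possible byte value: how far the window may shift when that
    // byte is the last one of the current window. Longer needles allow
    // bigger shifts, so fewer positions have to be inspected.
    size_t skip[256];
    for (size_t i = 0; i < 256; i++)
        skip[i] = needle_len;
    for (size_t i = 0; i + 1 < needle_len; i++)
        skip[(unsigned char)needle[i]] = needle_len - 1 - i;

    size_t pos = 0;
    while (pos + needle_len <= haystack_len) {
        if (memcmp(haystack + pos, needle, needle_len) == 0)
            return haystack + pos;
        pos += skip[(unsigned char)haystack[pos + needle_len - 1]];
    }
    return NULL;
}
```

Something like bmh_search(line, strlen(line), "VrKDJbjoQ9xnzg", 14) would return a pointer to the first match in a line, or NULL if there is none.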
In the rest of the article I quoted the performance for the smallest substring.
In my experience long substring searches are somewhat rare. But in the end that depends on your use case.
The awk side of things
I also wanted to test out
awk a bit.
If you've never heard about it:
awk is a command line tool for line based processing.
Perfect for processing logs, CSVs, statistics and stuff like that.
It basically executes scripted actions before processing a file, then on each line that matches a pattern and after processing a file.
It can do much more than just search each line for a substring.
And what's really cool:
awk reads the input line by line.
It doesn't have to load the entire file into memory.
So you can happily crunch gigabytes or terabytes of data all day long without worrying about running out of memory.
Well, not that this is reflected in the benchmark since the entire file was in memory when those benchmarks were run. :P
I threw sed into the chart as well because it had nowhere else to go and felt lonely.
GNU awk seems to be the default version, at least on Ubuntu based systems.
The performance is pretty decent and it could handle about 350 MiByte/s on the test system.
mawk is another implementation and uses a bytecode interpreter.
I had it flagged down for a while because it seemed to be quite a bit faster than GNU awk.
Looks like it actually is. :)
Also kind of funny: When I used the
index() function of
mawk to look for a substring instead
of a regular expression it got quite a bit slower (the "mawk index" bar).
Naive line-by-line search in different languages
As mentioned before: I wanted to know what performance I can expect from a naive line-by-line search in different languages.
This might be useful to decide if I go with a simple low-effort brute-force solution.
Or if I have to spend a bit more time on evaluating alternatives (a database, library or whatever).
It all depends on the requirements and environment of a project.
The performance numbers given here are just data points that help to figure out up to which data sizes a simple brute-force line-by-line search is fast enough.
Again: Please don't take this as a performance comparison. Because the world isn't that simple.
I hope you don't choose the programming language you use for a project based on its naive find-substring-in-line performance. ;)
With all that out of the way, on to the programs. The code is simple enough, here for example in Ruby:
File.open(filename) do |f|
  f.each_line do |line|
    puts line if line.include? search_term
  end
end
I wrote the same in PHP, Python and C as well. But we'll look at the C programs later.
The nice thing of this approach is that we only need the memory for one line at a time.
No matter how large the entire file is we won't run out of memory.
As long as the entire file isn't just one big line that is. :D
Pretty decent performance for just a few lines of code.
Probably fast enough for a few hundred MiBytes of data.
But for larger data sets I would probably execute a
grep command if the code only needs to run on Linux.
Search the whole file instead of each line
While playing around with the
strstr() function in C I noticed that the function by itself is pretty darn fast.
So wouldn't it be great if we could eliminate the line-iteration overhead and our code could spend all its time in strstr()?
What if we had the entire file in a single string.
Then we could search this one large string with consecutive
strstr() calls until we've found all the matches and reached the end of the string.
This is more like searching in a file with a text editor than searching each line individually.
Each time we find a match we could search for the start and end of the current line and print the entire line (just like grep does).
Then our code wouldn't have to loop over each line, wasting instructions and CPU time doing so.
Might be faster but it would no longer be easy to search for multiple substrings in the same line.
Also the code is more complicated than before, e.g. see ruby2-all-index.rb.
Another slight problem is that we would need the entire contents of the text file in one huge string variable to do that.
But reading several hundred MiBytes kind of sucks: We would go over the entire file just to load it into memory and then go over that memory again to search it.
Well, somewhat obscure (depending on your background) operating system knowledge to the rescue! :)
Operating systems can "map" files into memory.
Basically the operating system reserves a part of your memory (address space to be more exact) that is as large as the target file.
When you access that memory at a given place the operating system automatically reads the corresponding chunk of data directly from the file.
Meaning you can read data from a file just by looking at the right memory address.
And big surprise: A string is just another (usually small) part of your memory.
So with memory mapping we can use the entire file as one large string without having to read it first.
The operating system reads the parts that are used as soon as we access them.
There's more to memory maps than that.
For example the operating system can easily evict seldomly used data chunks and use the memory for more important purposes.
If need be it can always read the data chunks from the disk again.
This can be useful when some other programs quickly need more memory (e.g. when you open a particularly "modern" website in your browser).
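Here's a condensed C sketch of that approach (simplified, not the exact program from the repository). I use memmem() instead of strstr() so the mapped data doesn't need a zero-terminator at the end:

```c
#define _GNU_SOURCE  // for memmem()
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(int argc, char **argv) {
    if (argc != 3) {
        fprintf(stderr, "usage: %s search-term file\n", argv[0]);
        return 1;
    }
    const char *term = argv[1];
    size_t term_len = strlen(term);

    // Map the whole file into memory as one big read-only buffer.
    int fd = open(argv[2], O_RDONLY);
    struct stat st;
    fstat(fd, &st);
    char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    char *end = data + st.st_size;

    // Search the mapped data with consecutive memmem() calls. memmem() works
    // like strstr() but takes explicit lengths, so no zero-terminator needed.
    char *pos = data, *match;
    while ((match = memmem(pos, end - pos, term, term_len)) != NULL) {
        // Find the start and end of the line containing the match...
        char *line_start = match;
        while (line_start > data && line_start[-1] != '\n')
            line_start--;
        char *line_end = memchr(match, '\n', end - match);
        if (line_end == NULL)
            line_end = end;
        // ...and print the whole line, just like grep would.
        fwrite(line_start, 1, line_end - line_start, stdout);
        putchar('\n');
        pos = line_end;  // continue searching after this line
    }

    munmap(data, st.st_size);
    close(fd);
    return 0;
}
```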
Anyway, I gave it a spin and lo and behold now we're talking. :)
PHP and Ruby really shift up a gear here. Easily achieving several times their previous performance.
Depending on the usecase this might be complexity well invested.
Python is pretty interesting here: It has an explicit
mmap() function that maps a file into memory.
The 2nd Python program uses that but otherwise does a line-by-line search just as before.
Just with that it's a bit faster, but this is probably not because of the memory mapping itself.
Usually reading lines from a file in Python returns an
str for each line.
Those are encoding-aware and might do some extra stuff for that.
Reading lines from a memory mapped file though returns
bytes. Those are just raw data without any encoding.
So with the switch to mmap() I also changed the search from str to
bytes and I suspect that this causes the difference in processing speed.
Another note about PHP and Ruby:
Both languages don't expose an explicit way to memory map a file.
Of course there are extensions for that but I wanted to keep it simple.
Instead they both have functions to read the entire file into a string (file_get_contents() in PHP and File.read in Ruby).
I hoped those use mmap() internally and map the file into memory instead of reading it chunk by chunk.
This worked out in my micro-benchmark here but I wouldn't count on that behaviour for a real project without further research.
Especially if the file is larger than the system's memory. ;)
Going faster - C and multithreading
It might suprise some people but I still write quite a bit of code in C.
Some tasks are simply easier done there than in other languages.
Naturally I also wanted to know how fast the above programs would be when written in C.
Oddly enough threading is something I find way easier in C than e.g. in Python or Ruby.
No need to worry about the global interpreter lock (aka GIL), garbage collector and so on.
We just let each thread search a slice of the data, e.g. 4 slices for 4 threads.
All previous benchmark results were still quite a bit away from the system's memory bandwidth of 13.8 GiByte/s (as reported by mbw).
That is a very rough estimate of how fast the CPU can read from and write to RAM.
Maybe multithreading will help us get closer to that limit.
Sorry, unfortunately the sky isn't the limit for a CPU. :P
Well, it turns out that the i7-8550U can read more than 13.8 GiByte/s. :D Incidentally the PC system peaked at 4 threads with 16.8 GiByte/s.
Turns out that if you really need performance C is the way to go. By a wide margin.
With 4 threads the test system could search through 1 GiByte of data in just 65ms.
I've seen a lot of SQL queries that operated on a lot less data and took a lot longer.
But if that makes sense depends on your project and environment (as usual).
The downside is the complexity of the task in C.
You have to be careful to properly slice the data and to put a zero-terminator at the end of each slice (
strstr() needs one).
Not as easy as writing a handful of lines in Ruby. But not particularly difficult if you're fit in C.
You can look at c5-threaded-strstr.c for more details or feel free to just ask.
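To give a rough idea of the slicing, here's a condensed sketch (not the actual c5-threaded-strstr.c). It snaps each slice boundary to the next line break so no line is cut in half, and to keep it short it counts matching lines with memmem() instead of zero-terminating each slice for strstr(). You'd compile it with -pthread and hand it the memory-mapped file contents from the earlier sketch:

```c
#define _GNU_SOURCE  // for memmem()
#include <string.h>
#include <pthread.h>

#define THREAD_COUNT 4

typedef struct {
    const char *data, *end;   // this thread's slice of the mapped file
    const char *term;
    size_t term_len;
    size_t match_count;
} slice_t;

// Each thread searches only its own slice of the data.
static void *search_slice(void *arg) {
    slice_t *s = arg;
    const char *pos = s->data, *match;
    while ((match = memmem(pos, s->end - pos, s->term, s->term_len)) != NULL) {
        s->match_count++;
        // Skip to the end of the current line so each line is counted once.
        const char *nl = memchr(match, '\n', s->end - match);
        pos = nl ? nl + 1 : s->end;
    }
    return NULL;
}

// Split the data into THREAD_COUNT slices, snapping each boundary to the next
// line break, then search the slices in parallel and sum up the match counts.
size_t threaded_search(const char *data, size_t size, const char *term) {
    pthread_t threads[THREAD_COUNT];
    slice_t slices[THREAD_COUNT];
    const char *start = data, *end = data + size;

    for (int i = 0; i < THREAD_COUNT; i++) {
        const char *slice_end;
        if (i == THREAD_COUNT - 1) {
            slice_end = end;
        } else {
            slice_end = data + size * (i + 1) / THREAD_COUNT;
            if (slice_end < start)
                slice_end = start;
            // Move the boundary forward to just after the next newline,
            // so no line is cut in half between two threads.
            const char *nl = memchr(slice_end, '\n', end - slice_end);
            slice_end = nl ? nl + 1 : end;
        }
        slices[i] = (slice_t){ start, slice_end, term, strlen(term), 0 };
        pthread_create(&threads[i], NULL, search_slice, &slices[i]);
        start = slice_end;
    }

    size_t total = 0;
    for (int i = 0; i < THREAD_COUNT; i++) {
        pthread_join(threads[i], NULL);
        total += slices[i].match_count;
    }
    return total;
}
```

Counting instead of printing also sidesteps the question of how to keep the output of several threads from interleaving, which the real program has to deal with.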
Putting the pedal to the metal
Ok, I didn't.
To utilize the maximum memory bandwidth of modern CPUs you have to use vector instructions to fetch data from memory into vector registers.
Not to mention that we can use vector instructions to scan a lot of characters in parallel.
More details about that (and much more) are in Agner Fog's optimization manuals.
But a fair bit of warning, this is pretty advanced and detailed stuff over there. ;)
As much fun as it would be to write an AVX2 assembler version my motivation didn't carry me that far… yet.
Honestly I can envision myself using and maintaining the multithreaded C version in some circumstances.
But an AVX2 version… not very much. But well, who knows what the future will bring. :D
When the data is a tight fit
In all previous runs the entire data easily fit into memory.
I wanted to see how far I could take this until we get limited by the read bandwith of the drive on which the file is stored.
My PC has 15.6 GiByte RAM (as reported by the
free program), so I did a run on it with 100 million lines (14.3 GiByte).
I didn't use the notebook because I was too lazy to generate a 31 GiByte test file. :P
This run only contains some of the programs that can handle files that are larger than the memory.
Either by iterating line-by-line or by explicitly memory mapping the file.
While all the data might still fit into memory I didn't want to risk thrashing.
grep and the search-by-line C program are straight out limited by the read speed of the disk (104 MiByte/s).
Where we previously had hundreds of MiByte/s or even several GiByte/s we're now down to whatever the disk can provide.
Remember, this one chart shows benchmark results from the PC.
There the file was stored on a HDD (104 MiByte/s), not an SSD like on the notebook (1.64 GiByte/s).
It starts to get interesting when the first program runs that explicitly uses memory mapping.
The first time it still reads the entire data from disk.
But after that it and all the following programs (also using memory mapping) get the data from the page cache.
I'm not sure why that is. But given that this is a corner case I didn't dig any deeper.
I can't really see myself wanting to use pretty much all of my memory as a cache for a substring search. ;)
A bit of perspective
As much fun as it is to optimize the brute-force approach… at some point it will no longer be the right tool to solve the problem.
If your data is just a few MiBytes in size, or even a few hundred MiBytes, the brute-force approach is quite attractive.
You only need to write the file with all the data and execute a
grep command when someone does a search.
All in all just a few lines of code.
Pretty nice complexity to performance ratio I think :)
Personally pretty much all the projects I did for customers were in that category (e.g. entire database SQL dump of a few hundred MiBytes).
So for me those performance numbers are actually kind of relevant. :o
But sometimes you have a lot more data, and at some point brute-force just isn't viable anymore.
Either because the text file gets too large or you don't want to waste the memory and I/O bandwidth.
For those situations other tools are better suited to get the job done.
Databases have indices for a reason (but I don't know a good index structure for substring search).
In that situation my first approach would be:
Maintain a list of unique words in the searchable data.
Use that for an autocomplete on the client side so the server only ever gets full words (not substrings).
Then use a word based search on the server (a simple Python
dict lookup or a hash table, a text index in SQL, Elasticsearch, whatever).
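Just to make that idea concrete, here is a toy sketch of such a whole-word lookup, using a sorted array and a binary search instead of a hash table (any of the options above would fill the same role):

```c
#include <stdio.h>
#include <string.h>

// One entry per (word, document id) pair. In a real system this index would
// be built from the searchable text; here it's just a hard-coded example.
typedef struct { const char *word; int id; } entry_t;

static entry_t index_entries[] = {   // must be sorted by word
    { "anchor",  17 },
    { "weather",  3 },
    { "weather", 42 },
    { "zebra",    7 },
};
static const size_t entry_count = sizeof(index_entries) / sizeof(index_entries[0]);

// Binary search for the first entry with the given word, then walk forward
// over all entries that share it. Exact whole-word lookups only, no substrings.
static void lookup(const char *word) {
    size_t lo = 0, hi = entry_count;
    while (lo < hi) {
        size_t mid = (lo + hi) / 2;
        if (strcmp(index_entries[mid].word, word) < 0)
            lo = mid + 1;
        else
            hi = mid;
    }
    for (size_t i = lo; i < entry_count && strcmp(index_entries[i].word, word) == 0; i++)
        printf("%s -> id %d\n", word, index_entries[i].id);
}

int main() {
    lookup("weather");  // prints ids 3 and 42
    return 0;
}
```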
But again: Only if the naive approach would be too slow or the data too large.
After all a dozen lines of code are quickly thrown away.
And sometimes the naive approach might carry you further than you think.
In the end it isn't about which solution is the "best", but instead about which solution is a better fit for a given problem in a specific environment.
And I hope this blog post at least somewhat hinted at the boundaries of the brute-force search method.
That it showed the complexity-to-performance sweet spot for this approach: When it can be used and when it stops being useful.
If anyone is reading this far: Wow, I'm amazed at your endurance. Take a cookie and enjoy the holidays. :)
A while ago I wanted to plot a simple mathematical function for a project of mine. No problem, there are a lot of good
function plotters out there. Then I stumbled over the polynomial smooth min function by Inigo Quilez. It
was written for GLSL shaders so it uses GLSL functions like clamp() and mix():
float smin( float a, float b, float k )
{
    float h = clamp( 0.5+0.5*(b-a)/k, 0.0, 1.0 );
    return mix( b, a, h ) - k*h*(1.0-h);
}
Ok, getting that into one of those plotters seemed like a hassle. Some of those tools can do amazing things, but I’m a
programmer so I wanted to write the functions in some programming language. I found code snippets and small
libraries to draw function plots into a canvas but nothing simple with panning and zooming. I know there’s insert your
favorite tool here but my Google bubble wasn’t trained for that and I didn’t find it. Of course there are the
heavy-weights like Octave or CindyJS but those didn’t
appeal to me at all.
I think you can guess the rest. :D How hard can it be to write a simple plotter with panning and zooming? So I wrote a
little tool for my own personal needs: js2plot (GitHub project).
In it you write a bit of JavaScript code and call plot() to show the graph of the function. You can then pan and zoom around, use variables, the built-in Math functions, and
so on. All in all nothing special but for me it's more convenient to write mathematical code that way (see the smooth min function above).
In case that you have the same problem I hope the tool is useful to you. The plotting part is written as a small
library without dependencies. With it you can turn a
canvas element into a graph with panning and zooming. But you
have to provide a way for users to write the plotting code.
ps.: I only use the tool on desktops and notebooks so I didn’t implement touch panning and zooming. Panning isn’t a
problem but handling a pinch-zoom gesture still seems like a pain. Reimplementing that gesture isn't my idea of fun so I
skipped it for now.
A few days ago I implemented a style switcher for my latest project. Unfortunately I ran into a rather obscure Chromium (and WebKit) bug. This post should give a small heads up to anyone running into the same weird effects. I opened an issue for the bug, so please post your comments there.
To switch between styles you just have to disable the current stylesheet and enable another one. I had no trouble with that in Firefox and Internet Explorer. But in Chromium and WebKit enabling the other stylesheet doesn't work the first time. It just stays disabled. This leaves the page unstyled since the first stylesheet gets disabled (disabling works). The bug only happens the first time. When you disable it and enable it again it works.
And that’s also the workaround: Instead of just enabling the other stylesheet you can enable it, disable it and then enable it again. All in one go.
Time to illustrate this with some code. This is the code of a small page usually styled by
blue.css. As soon as the page is loaded the style is switched to green.css:
<title>Switch when loaded</title>
<link href="blue.css" rel="stylesheet" title="blue">
<link href="green.css" rel="alternate stylesheet" title="green">
<script>
    var blue_style = document.querySelector("link[title=blue]");
    var green_style = document.querySelector("link[title=green]");
    blue_style.disabled = true;
    green_style.disabled = false;
</script>
green.css is an
alternate stylesheet, meaning it’s not used by default (the browser ignores it). When the page is loaded the styles’
disabled attributes are set accordingly. That attribute is part of the HTMLLinkElement interface. Or you can use the newer StyleSheet interface in which case it would be
green_style.sheet.disabled = false; (same bug applies).
The workaround looks rather silly but is effective nevertheless:
<title>Switch when loaded</title>
<link href="blue.css" rel="stylesheet" title="blue">
<link href="green.css" rel="alternate stylesheet" title="green">
<script>
    var blue_style = document.querySelector("link[title=blue]");
    var green_style = document.querySelector("link[title=green]");
    blue_style.disabled = true;
    green_style.disabled = false;
    // Workaround for Chromium and WebKit bug (Chromium issue 843887)
    green_style.disabled = true;
    green_style.disabled = false;
</script>
You can try it with this minimalistic standalone style switcher. The styles just change the color of the first paragraph. When it’s black the page is unstyled. This only happens when you load the page and click on "green". Everything else after that or every other combination of clicks works.
Finding the cause of the bug was quite maddening… and I hope this post spares you the same experience.
A few years ago I wrote a small PHP function to send mails via SMTP. Not so long ago a friend asked for it. So I cleaned it up and wrote a bit of documentation: smtp_send. It won’t work for everyone but it’s only about 90 lines of code and you can easily change it to suite your needs.
If you don’t care about dependencies or code footprint you can use one of the many libraries out there. But if you’re tired of PHP's
mail() function, overly complicated APIs or huge amounts of dependencies just to send mails this might be worth a look. The function is meant for people who want to construct the mails themselves. Either for simplicity or because of more complex cases. Then you can use the function to simply send the result.
$message = <<<EOD
From: "Mr. Sender" <email@example.com>
To: "Mr. Receiver" <firstname.lastname@example.org>
Subject: SMTP Test
Date: Thu, 21 Dec 2017 16:01:07 +0200
Content-Type: text/plain; charset=utf-8

Hello there. Just a small test. 😊
End of message.
EOD;
smtp_send('email@example.com', 'firstname.lastname@example.org', $message,
    array('user' => 'sender', 'pass' => 'secret')
);
I put it online because my friend searched for a simple PHP mailer but couldn’t find one matching the project's requirements. I had the same experience a few years ago. In the end I decided to look at the SMTP RFC and write my own little function. With a bit of luck this makes your life easier if you’re caught in the same corner cases.
The documentation also contains examples of mails with attachments, alternative content (HTML mail with plain text fallback) or both (alternative content nested in mixed content). For some reason this information seems to be surprisingly hard to find.
I was kind of bored for once and figured I could clean up an old microbenchmark and kick the results out of the door.
This is gonna be another very technical post for programmers. Be prepared. ;)
A while ago I came across a paper about "exokernels", very light-weight operating system kernels that leave most of the work up to libraries (Exokernel: An Operating System Architecture for Application-Level Resource Management).
The paper is from 1995 and among other things they implemented high performance system calls.
I always heard system calls are expensive but I never saw any real numbers or measurements.
Only the stuff the C standard library was doing with
fwrite(): Batching I/O to minimize system calls.
But system calls evolved a lot since the x86 32 bit days and today we have a dedicated
syscall instruction for them.
This got me wondering: How fast are system calls on Linux today compared to normal function calls?
In the paper they compared a function call with no arguments to the
getpid() system call.
That system call just does the minimum of work: Switch into kernel mode, read one variable and return its value.
Sounds simple, so I wrote a set of small microbenchmarks and did the same on my Linux notebook.
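The actual microbenchmarks are in the repository linked below and partly written in assembler. A stripped-down C sketch of the idea might look like this, using the syscall() wrapper instead of hand-written assembler and clock_gettime() for the timing:

```c
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <sys/syscall.h>

#define ITERATIONS 10000000

// A trivial function to compare against: does about as much work as getpid()
// itself, just without entering the kernel.
static volatile int dummy_value = 42;
static int dummy_function(void) { return dummy_value; }

static int getpid_via_syscall(void) { return syscall(SYS_getpid); }
static int getpid_via_libc(void)    { return getpid(); }

// Returns the average wall time in nanoseconds per call of fn().
static double bench(int (*fn)(void)) {
    struct timespec start, end;
    clock_gettime(CLOCK_MONOTONIC, &start);
    for (int i = 0; i < ITERATIONS; i++)
        fn();
    clock_gettime(CLOCK_MONOTONIC, &end);
    double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
    return ns / ITERATIONS;
}

int main() {
    printf("function call:        %.1f ns\n", bench(dummy_function));
    printf("getpid via syscall(): %.1f ns\n", bench(getpid_via_syscall));
    printf("getpid via glibc:     %.1f ns\n", bench(getpid_via_libc));
    return 0;
}
```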
Back when reading the paper I just ran the microbenchmarks on my own notebook (an old Intel Core 2 Duo T6600 from 2009).
But right now I'm with my brother and he owns quite a few computers, so I asked him to run the microbenchmarks on some of them.
We also ran the benchmarks on some older CPUs (see the photo :)).
Anyway, it gave me a nice perspective of how the performance of system calls evolved over time.
Mind you, I just took whatever PC was close by (no idea how fast or slow) as well as a few older CPUs.
I was curious how system call performance changed over the years on x86_64 and don't really care about what CPU is faster or slower.
Don't take these numbers and say that CPU X is faster than CPU Y. It just doesn't make sense.
System calls are one of many factors that make up the performance of a CPU and software but by no means a defining one.
So don't take the numbers too seriously. This is just for fun after all. ;)
Before we get to the numbers I have to explain a bit about how Linux optimizes some system calls:
On x86 64 bit system calls are usually made through the syscall instruction.
You put the arguments into the proper registers and the
syscall instruction then transitions into kernel mode and calls the proper kernel function.
This incurs some overhead since the CPU has to switch into a different address space and this might need updates to the TLB, etc.
For some system calls the Linux kernel provides optimized versions via the vDSO.
In some cases these optimized versions don't need to transition into kernel space and thus avoid the overhead.
By default the C runtime on Linux (glibc) uses these optimized functions automatically whenever there is one available.
I'm interested in how both of these mechanisms perform.
On to the numbers. Just for a bit of context: Reading a value from the main memory takes about 100ns, see Latency Numbers Every Programmer Should Know.
Syscalls incur quite a heavy overhead compared to normal function calls. That got better with time.
The vDSO implementation of the
getpid() system call is pretty good at mitigating the system call overhead and is almost as fast as a normal function call.
On the Intel Celeron D 341 from 2004 a system call via the
syscall instruction was about 25 times slower than a system call via the vDSO.
On the Intel Core i7-4790K from 2014 it's only about 12 times slower.
For me I'll use 10 times slower as a rule of thumb for modern CPUs and 25 times for older CPUs.
In case you're wondering about the details:
- All benchmarks were run on Linux Mint booted via USB stick (Linux 4.4.0-21-generic)
- The benchmarks execute the function call or
syscall 10 million times in a loop.
- The benchmark is run 10 times. The average of those runs is then divided by 10 million to get the time per function or system call.
- The function call benchmark is compiled with GCC (I also did it by hand in assembler but the result was almost the same).
- The syscall benchmark is programmed by hand in assembler using the syscall instruction.
- The vDSO benchmark is a small C program compiled with GCC. When you use the "normal" C functions for system calls glibc automatically uses the vDSO.
- I've put the source code and results of the microbenchmarks into a repository. Feel free to look around in case you have any questions. You can also find a lot more graphs there.
fwrite() I/O batching
I was also wondering about the I/O batching the standard C library does with fwrite().
Is it really worth the effort? Or are that many function calls wasted since system calls can be pretty fast, too?
A few microbenchmarks answered that question, too. This time I measured 1 million 4 byte reads and writes.
Again directly via the
syscall instruction and via the vDSO.
But this time also via
fwrite() instead of using system calls.
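Stripped of the measurement code, the write() and fwrite() variants boil down to loops like these (a sketch, not the exact benchmark code; the read side is analogous):

```c
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

#define COUNT 1000000

int main() {
    const char chunk[4] = { 'a', 'b', 'c', 'd' };

    // Variant 1: one write() system call per 4-byte chunk, 1 million syscalls.
    int fd = open("direct.out", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    for (int i = 0; i < COUNT; i++)
        write(fd, chunk, sizeof(chunk));
    close(fd);

    // Variant 2: one fwrite() per chunk. glibc buffers the data and only
    // issues a write() system call every few KiByte instead of every 4 bytes,
    // so far fewer system calls for the same amount of data.
    FILE *f = fopen("buffered.out", "w");
    for (int i = 0; i < COUNT; i++)
        fwrite(chunk, 1, sizeof(chunk), f);
    fclose(f);

    return 0;
}
```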
Here's what I got:
So, yeah, in this scenario the I/O batching definitely is worth it. It's about 5 to 10 times faster.
But keep in mind, these were 4 byte reads and writes. I took a look with
strace and they got batched into 4 KiByte reads and writes.
So when you're reading or writing a few KiBytes the speedup is likely not that large.
It's also interesting that the vDSO doesn't help much for the
write() system calls. But usually it doesn't hurt either.
So maybe these system calls can't be optimized in that way.
If you're still with me: Thanks for reading. I hope my strange little experiment was interesting. :)
If you've done a bit of graphics programming with OpenGL you're probably familiar with vector and matrix math. Every point or direction in 3D space can be represented as a vector and most manipulations of those can be represented as matrices (moving them around, rotating them, projecting them on the screen, etc.).
I wasn't happy with the C math libraries I found so I wrote a new one (surprise!). Read on if you want to know why. But here is what using it looks like:
mat4_t projection = m4_perspective(60, 800.0 / 600.0, 1, 10);
vec3_t from = vec3(0, 0.5, 2), to = vec3(0, 0, 0), up = vec3(0, 1, 0);
mat4_t transform = m4_look_at(from, to, up);
vec3_t world_space = vec3(1, 1, -1);
mat4_t world_to_screen_space = m4_mul(projection, transform);
vec3_t screen_space = m4_mul_pos(world_to_screen_space, world_space);
OpenGL provides a lot of vector and matrix math for shaders that run on the GPU but on the CPU you have to do it yourself. I like it this way because it allows you to choose what kind of 3D math library you want: One that tries to optimize every last bit of performance but is complicated to use or one that is easy to use but lax on performance. Depending on the project you can pick what you need.
I do pretty much all of my low level graphics programming in C (I know, I'm weird, but OpenGL is a C API after all). And I haven't found a 3D math library I'm happy with for C. I don't know all of them but the ones I've looked at were complicated to use and vague on their semantics. Were they written with the OpenGL or Direct3D conventions in mind? How are matrices laid out in memory? Can I pass them into OpenGL directly or do I have to transpose them beforehand?
There are good C++ math libraries that perfectly fit my purpose but I'm an awful C++ programmer. I spend way more time fiddling around with the language than I spend on solving the problem. So I stayed with C. Maybe someone else will find non-templated math code similarly appealing.
All this drove me to write my own small 3D math library for C. Just the basics, nothing fancy. A friend of mine joined in and together we started out from scratch. It was quite a nice learning experience. We spent a lot of time on the whiteboard, calculated a lot of the math by hand and wrote a lot of tests until we understood what the math should actually mean. It was just there that we realized that the perspective divide step in OpenGL is just a fixed-function hack to fit a perspective projection into a 4x4 matrix. Kind of pointless with shaders...
Anyway, if you ever do OpenGL stuff in C and need a simple to use math library you can pick it up here: math_3d.h. It's a single header-file library in the style of stb_image.h so it's easy to integrate into projects.
It covers basic 3D vector math, transformation and camera matrices (translation, rotation, scaling and look at), projection matrices (orthographic and perspective) as well as basic matrix math (matrix-matrix, matrix-point and matrix-direction multiplication, inversion of affine transformations). The documentation is in the file itself so take a look if you're interested.
Happy programming. :)
During the last few weeks I wrote MiniDynDNS to build my own dynamic DNS service. Something like DynDNS but all by myself. This post explains the basic steps needed to wire MiniDynDNS into the worldwide DNS system.
I'm using it to create DNS names that point to devices at home I want to access via the internet. This is pretty nice with IPv6 since every device gets its own public IPv6 address. But please make sure only the services you want to have available are actually listening on the public IPv6 address. Or configure your firewalls accordingly.
To build your own DynDNS you'll need a few bits and pieces:
- A server with a static IP address. Here we'll use
203.0.113.17 as a placeholder for that IP.
- A registered domain of your own.
example.com shall proudly serve as a placeholder for that domain.
- Access to the nameserver or DNS records of that domain. We'll need to add some DNS records there.
The bigger picture
The whole idea of this operation is to create a subdomain that is managed by a program running on your server. Here we'll use
dyn.example.com but it can be anything as long as it's a subdomain of your registered domain. Whenever someone on the world resolves a name like
weather.dyn.example.com they're going to ask that program on your server to get the current IP of that name.
For that we first need a program running on your server that can answer DNS requests and allows us to update these IPs
when they change. Obviously we're going to use MiniDynDNS for that. That's why I wrote it.
Second we need to tell the global DNS system that the program running on your server is responsible ("authoritative") for
the dyn.example.com subdomain. This is called "delegating" a subdomain. When you bought your own domain you also bought the right to delegate subdomains to whoever you deem worthy. With that in place whenever someone resolves a name in
dyn.example.com they'll ask MiniDynDNS on your server.
Note that you can only delegate a subdomain to a host, e.g.
ns.example.com. This host then has to resolve to
203.0.113.17. You can delegate to whatever host you want but in the end this host has to resolve to your public IP. Here we'll use
ns.example.com as a placeholder for that.
The final part is a script running on whatever device or computer you want to have a dynamic domain name for. That script will periodically report its current IP to your MiniDynDNS.
If everything works correctly you can add any devices you want to your
dyn.example.com subdomain and access them from anywhere in the world.
Think weather.dyn.example.com, spidercam.dyn.example.com or whatever. Get creative.
So lets get to it.
1. Run MiniDynDNS on your server
Download or clone MiniDynDNS from GitHub and do the "installation". Basically that's renaming
config.yml and setting the proper values for your setup. The
domain and mail settings are the important ones.
For MiniDynDNS to answer incoming DNS requests it has to listen on port 53. That's where other servers or clients expect to get DNS requests answered. Changing that will probably break things.
Per default it will use port 80 for a simple HTTP API with which we can update DNS records. In case this port is already used by a webserver like Apache you can change it to something else like 8080. We only need it for the scripts that periodically report the IPs of your devices to the server.
You can tell your server system to start MiniDynDNS on server startup. For me it's just a funny hobby so I leave it running in a
screen terminal. You might also need to tell your servers firewall to open port 53 and 80 for incoming traffic (or whatever port you use for the HTTP interface). Otherwise the firewall will block everything and you'll just get timeouts.
Now your basic DynDNS server should already be up and running. To test it you can fire up a terminal and try this command:
nslookup foo.dyn.example.com 203.0.113.17
This tells nslookup to resolve foo.dyn.example.com by asking the DNS server 203.0.113.17.
If everything works well it should tell you that foo.dyn.example.com has the IP address 192.168.0.1.
This assumes that you use the default database (just renamed
db.yml). If you already changed your DB you have to change the domain name in the command accordingly.
2. Delegate dyn.example.com to the MiniDynDNS server
Now on to telling the rest of the world that your server manages the
dyn.example.com subdomain by itself. For this you have to add two records to your normal nameserver. In my case the company where I registered my domain provides a webinterface for this task.
You have to add these two DNS records to your example.com domain:
dyn NS ns.example.com
ns A 203.0.113.17
The first record tells the DNS system that the
dyn.example.com subdomain is delegated to
ns.example.com. The second record says that
ns.example.com has the IPv4 address
203.0.113.17. Please remember to replace the domain name and the IP with your own values. If your server also has an IPv6 address you should add an AAAA record for that, too.
3. Update your IPs periodically
This differs greatly between IPv4 and IPv6.
With IPv4 only your router has a public IP address. For every service you have to create a port forwarding to the appropriate internal computer or device. So in any way there's only one public IP to report to the DNS server. Many routers already have built-in support for this. Usually it's hidden somewhere in the webinterface called "dynamic DNS", "DynDNS" or something like that.
In the case of my FritzBox router I have to enter the domain (
foo.dyn.example.com), a user name (just
foo), the password (
bar) and an update URL (
http://ns.example.com/?myip=<ipaddr>). The router will replace "<ipaddr>" with its current public IPv4 address. It seems like it then just fires this HTTP request every 30 minutes. Again these values are based on the default db.yml.
The required steps are probably quite different for other routers. You might even have to look into the manual or search a bit to figure out how to do this.
With IPv6 the situation is a bit simpler. Each device gets its own public IPv6 address. A script that runs every few minutes can simply report that IP to the DNS server. In case of a RaspberryPi running Raspbian this script will do the job:
#!/bin/bash
IP=$(ip addr show dev eth0 scope global | \
    grep --perl-regexp --only-matching '(?<=inet6 )2003:[0-9a-f:]+' | \
    head --lines 1)
curl -s "http://foo:bar@ns.example.com/?myip=$IP"
Again, "foo", "bar" and "ns.example.com" are values from the default
db.yml. Replace them with the values for your setup. In case you changed the port of the webinterface you also have to add a port to the HTTP request (something like
http://foo:bar@ns.example.com:8080/?myip=$IP for port 8080).
Save it to
/home/pi/update_ip.sh and make it executable with
chmod u+x update_ip.sh. When you run it (
./update_ip.sh) you should see something like "Your IP has been updated".
To execute the script every 5 minutes you just need to add a line to your crontab. The command
crontab -e should show you a text editor and by adding this line at the end you should be set:
*/5 * * * * /home/pi/update_ip.sh >> /home/pi/update_ip.log 2>&1
The "*/5" at the beginning means "every 5 minutes". If you want to run the script every 30 minutes its "*/30".
Phew, this was more text than I expected. When you run the command
nslookup foo.dyn.example.com you should now see
192.168.0.1 as a result (again, default database values). Note that this command asks the nameserver provided by your environment (e.g. by your ISP). Thanks to the domain delegation this DNS request should end up at your nameserver which can then answer it. When you have a webserver running on one of your devices you can even use these domain names to access websites.
Anyway, with that setup you should be able to manage your own subdomains. The MiniDynDNS readme has some more information and useful commands so better take a look there, too.
Have fun with MiniDynDNS. :)
During the last few weeks I've been working on a small, simple and self-contained dynamic DNS server: MiniDynDNS. A friend of mine wanted to bring one of his computers online but didn't want to use a 3rd party service like DynDNS.
Well, you probably know the typical hacking answer: "Why not build something like that ourselves?". How hard can it be? After all we only need a hash table or dictionary for a simple DNS server. Throw in some code to parse and generate DNS packets and you're done.
After a bit of hacking and testing it actually worked. An HTTP interface would be nice so I can tell my router to report changed IPs by itself. So in it went.
You can probably imagine that the result was a funny but scary bit of code. I did some cleanup and ended up with about 600 lines of Ruby. Some more refactoring into several classes resulted in a bit too much glue code for my taste so I reverted back to simple procedural style. The code isn't pretty or fancy but I hope it's easier to understand this way.
So if you want to roll your own little DynDNS for your private stuff, take a look. Maybe it's what you want. But remember: It's only built for small stuff, not high-traffic sites. :)
Thanks for scrolling down all the way, it can get quite lonely here…
Anyway, looking for older entries? Want to know more? Take a look at the archive.