After close to six years with SGH Capital, I’m pleased to announce I’ve joined OSS Ventures to scale up their VC arm and operations. I’ll be forever grateful for my time at SGH: from Entrepreneur in Residence to Partner, the learning curve has been incredible, and it now felt like the right time to home in on what I like and understand best – B2B SaaS.
Founded in 2018, OSS is a hyper-focused venture builder and investor tackling the future of operations and manufacturing. Even though the industrial sector (including construction) represents ~27% of the world’s GDP [1], it only attracts ~3% of VC funding [2]!
Climate change, geopolitical tensions, the energy crisis, inflation and supply-chain woes have magnified the weaknesses of our production and consumption models. To turn the tide, world leaders are undertaking massive investments, such as the Inflation Reduction Act, the CHIPS Act, or the Critical Raw Materials Act. Meanwhile, swaths of VCs are fighting for the hottest artificial intelligence deals but our factories and critical infrastructure rely on equipment that won’t be upgraded for another decade or two.
OSS leverages a wide network of industrial partners to build and invest in the modern factory stack. In recent years, we have incubated and funded a dozen startups whose solutions are used in over 1,000 factories. Our companies help manufacturers unlock tangible operational efficiencies and compound the know-how of their employees. These solutions fill information gaps and replace paper forms, disjointed spreadsheets, and antiquated software so that blue-collar and deskless workers, not just white-collar ones, can work smarter instead of harder.
OSS also invests ahead of or alongside top-tier VCs in extraordinary founders who want to build enduring businesses in that field in the US and Europe. We partner from pre-seed through Series B, and our hands-on operating partners help entrepreneurs scale their teams and revenues and set them up for success.
We are absolutely convinced that the next crop of billion-dollar companies will include several industrial SaaS platforms: that is what our portfolio’s growth and numbers hint at.
If you are working on something in that space, please reach out!
Besides your primary domain, it is also crucial to properly configure any parked domains you might have. Companies will most often register similar domains across multiple TLDs to ensure that malicious actors cannot set up misleading websites or mess with their online presence.
However, simply owning the domain is not good enough!
If you do not set up proper SPF and DMARC records at the DNS level, anyone can easily send spoofed emails from your parked domain: domain registrars may not preemptively set such records for you. Worse, spoofed emails can look very convincing, especially if the parked domain resembles your primary domain or brand.
The good news is that you just need two TXT records on each domain:

- A TXT record with the value "v=spf1 -all" (mind the double quotes).
- A TXT record named _dmarc with the value v=DMARC1; p=reject; rua=YOUR_REPORTING_URL; pct=100;

In my case, I manage all my DNS zones with Cloudflare, so I wrote the following bash script, which uses flarectl and assumes there are no existing TXT records:
# Cloudflare API token with DNS edit permissions
export CF_API_TOKEN=YOUR_API_KEY

# Apply the SPF and DMARC records to each parked zone
for zone in parked-domain1.com parked-domain2.com parked-domain3.com; do
  # SPF: authorize no senders at all (the quotes are part of the record)
  flarectl dns create --zone="$zone" --name="$zone" --type="TXT" --content="\"v=spf1 -all\""
  # DMARC: reject anything that fails checks and send aggregate reports
  flarectl dns create --zone="$zone" --name="_dmarc.$zone" --type="TXT" --content="v=DMARC1; p=reject; rua=YOUR_REPORTING_URL; pct=100;"
done
If you are not collecting DMARC reports, you can safely remove the rua
directive.
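Once the records are live, you can sanity-check them with dig (swap in your own parked domain):

# Both queries should print the TXT values set above
dig +short TXT parked-domain1.com
dig +short TXT _dmarc.parked-domain1.com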
For 69.48 Euros a year, Incogni promises to make data brokers remove your data so that it “stays secure and private”. But is it worth it?
In theory, as soon as you sign up, they send requests to all of the data brokers they have on file. Brokers then have one calendar month (GDPR) or 45 days (CCPA) to comply.
Brokers acting in bad faith could argue that the request is complex, which buys them two more calendar months. But after three months, you should definitely have confirmation that your data has been removed from their systems (or was never there in the first place).
Have a look at my dashboard below:
After nearly four months, the completion rate is 36%. The rest of the requests are still “In Progress”.
I don’t think this completion rate is a success. Do I have confidence that Incogni is chasing the remaining brokers? I can’t tell.
The dashboard ought to provide more insight into the process: clicking on a broker in the list of requests does not pull up any details about the request itself, but rather a general description of the broker’s business and Incogni’s assessment of the sensitivity of the data it may hold.
Instead, I wish they provided a simple timeline with key events such as “Request Sent”, “Action Required”, “Follow Up Sent”, “Request Completed”, etc.
Last September, their support team said this was “valuable feedback” but I didn’t notice any improvements to the dashboard.
Since then, I have discovered Data Brokers Watch, a very comprehensive database (over 900 brokers!), curated by a non-profit. They also enable you to request the deletion or a copy of your data (albeit one broker at a time).
You should go through their top 10 brokers, especially if you live in the US. It will only take you a couple of minutes.
As far as Incogni is concerned, properly going after hundreds of brokers likely requires more than 69 Euros per year. If you are unsure about signing up, it’s all about having the right expectations: know that they will only send a bunch of automated emails on your behalf.
Lastly, it’s absurd that Incogni, a privacy service, uses Google Analytics and Google Tag Manager. They should know better!
Update (Jan. 02, 2023): Incogni sends weekly “Progress Reports” emails. On my Dec. 31, 2022 report, the number of requests sent actually decreased from 93 a month ago to 89, which raises questions about Incogni’s accuracy.
Overall, Netlify is a wonderful solution and I expect I’ll continue to use it for other projects.
This blog is a very simple sandbox: it’s pure Jekyll and doesn’t use any serverless functions, making it trivial to move from one provider to another. I didn’t need to change, but I was curious – knowing that it would require almost zero configuration or fixing.
Plus, Cloudflare and Netlify both support the relevant features for this site, including the _headers file and syntax.
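For context, a _headers file maps URL patterns to response headers on both platforms. A minimal illustration (these particular headers are examples, not this site’s actual configuration):

/*
  X-Frame-Options: DENY
  X-Content-Type-Options: nosniff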
Even though Cloudflare has improved build times already, they are still very much behind Netlify in my experience: my last commit took 18 seconds to build and deploy on Netlify vs. 3 minutes 16 seconds on Cloudflare. A shocking ten times slower!
Cloudflare Pages also lacks build/deploy notifications, which is a big downer.
If you are using the JAMstack to its fullest potential, Netlify is the better option, without a doubt. It’s not worth migrating unless you want to leverage other Cloudflare products like R2 and Workers KV.
In my case, I’ll continue to use this blog as a guinea pig to see how the product grows.
Since my setup is quite simple and the community feedback was positive, I didn’t feel the need to wait for potential bugs to be fixed and went ahead.
The title of this post is self-explanatory, but this information might be helpful to UDM users who are unsure about upgrading to v3: upgrading to v3.0.13 breaks NextDNS.
In my case, after upgrading, I realized that my devices went back to using my ISP DNS and that custom names were no longer resolving.
Fixing this issue takes two minutes: re-install the CLI client, confirm the settings and you should be good to go.
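For the record, reinstalling means SSH-ing into the UDM and re-running the NextDNS CLI installer (the standard install command published at nextdns.io; it detects UniFi OS):

# Re-run the official NextDNS CLI installer
sh -c "$(curl -sL https://nextdns.io/install)"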
As far as I can tell, there are no other issues with NextDNS and UniFi OS v3 at this time.
TL;DR: If you rely on the qpress file archiver, you should update it ASAP.

On August 19th, 2022, Otto Kekalainen and Mikhail Chalov from AWS reached out by email to let me know they had found and fixed a directory traversal vulnerability in the qpress file archiver.

Traversals are a big no-no, especially in production environments. On top of that, Percona and MariaDB rely on qpress to perform database backups since it can compress large amounts of data very quickly, meaning that it’s bound to be installed on sensitive hosts.
Unfortunately, the project upstream is dead - which prompted me to fork it in the first place. As of this writing, the project homepage no longer loads.
Mikhail’s pull request is available here, with step-by-step instructions to reproduce the issue (which requires a malicious payload) if you are interested.
If you installed the qpress archiver, either from the original source or an older version of my fork, you should build a fresh binary using the 20220819 tag (or later) of my fork, which includes Mikhail’s fix.
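A sketch of the rebuild; the fork URL is omitted here and the exact build command may differ, so check the repository’s README:

# Clone the fork and check out the patched tag
git clone <fork-url> qpress && cd qpress
git checkout 20220819
# Build and install (exact build command per the fork's README)
make
sudo install -m 0755 qpress /usr/local/bin/qpress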
If you installed qpress from a Linux repo: as far as I can tell, distro packages still use the original unpatched 2010 source. You should replace your executable with a freshly built binary which includes the patch.
As soon as I realized my RoboRock was a capable quad-core computer running Ubuntu Trusty, I wanted to look at the firmware first-hand.
This post is not about “jailbreaking” the S5 - which has been covered elsewhere. Instead, I will be sharing the steps you can use to get your copy of the firmware so that you can review and decompile the scripts and binaries used to provision and run the robot.
1) Get a copy of the firmware file. See here if you want to download a different version.
wget https://cdn.cnbj2.fds.api.mi-img.com/rubys/updpkg/v11_002034.fullos.55915876-2190-407a-9fcb-f1e760d9b623.pkg
2) Decrypt the firmware file (use rockrobo when prompted for a decryption key):
ccrypt -d v11_002034.fullos.55915876-2190-407a-9fcb-f1e760d9b623.pkg
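Alternatively, ccrypt accepts the key on the command line with -K (convenient, though it will end up in your shell history):

# Non-interactive decryption with the key passed inline
ccrypt -d -K rockrobo v11_002034.fullos.55915876-2190-407a-9fcb-f1e760d9b623.pkg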
NB: Newer robots use a different encryption mechanism.
3) The decrypted “pkg” is actually a gzip archive which contains a disk.img, so we’ll decompress it:
tar zxvf v11_002034.fullos.55915876-2190-407a-9fcb-f1e760d9b623.pkg
4) Let’s find out more about disk.img with the file command:

file disk.img
disk.img: Linux rev 1.0 ext4 filesystem data, UUID=c3a11fc8-0afb-4909-948f-f764e532f7a6, volume name "rootfs" (extents) (huge files)
5) It’s time to mount this image:
sudo mount -o loop disk.img /mnt
NB: This command will fail on Mac OS (no native support of ext4 or loop devices). Ubuntu in a VM will do.
6) You can now freely inspect the firmware! To go to the main folder, do:
cd /mnt/opt/rockrobo
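When you are done exploring, remember to release the loop mount (run this from outside /mnt):

# Leave the mount point, then unmount the image
cd ~ && sudo umount /mnt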
You can preview the full rockrobo/ file tree on this GitHub gist (26 directories, 738 files).
From cloc:
--------------------------------------------------------------------------------
Language files blank comment code
--------------------------------------------------------------------------------
Bourne Shell 11 154 38 1044
Perl 1 19 2 110
Bourne Again Shell 1 1 0 21
--------------------------------------------------------------------------------
SUM: 13 174 40 1175
--------------------------------------------------------------------------------
That’s a lot of bash scripts! It also turns out that they perform critical operations, but I’ll keep that and other fun facts for a future post.
DMARC (Domain-based Message Authentication, Reporting and Conformance) is an email authentication protocol. It is designed to give email domain owners the ability to protect their domain from unauthorized use, commonly known as email spoofing. The purpose and primary outcome of implementing DMARC is to protect a domain from being used in business email compromise attacks, phishing email, email scams and other cyber threat activities. [1]
This post will not explain how to set up a DMARC policy on your domain. Google has a great guide to get you started. If you want to nerd out, RFC7489 has you covered.
Instead, I want to share my experience, which, hopefully, will convince you to roll out your own DMARC policy. Email spoofing is everywhere and unless you have the right DMARC policy in place, you can’t see and combat it.
I work for a small early-stage venture capital firm. We don’t get much media attention because we usually invest alongside larger funds and don’t write eye-popping checks. But we still manage quite a bit of money and handle sensitive and generally confidential information. Some of that information needs to be shared externally with our investors, lawyers, auditors, accountants, banks, etc., which happens over email 99% of the time. Hackers know this and will play the long game to steal large sums of money.
We rolled out SPF (Sender Policy Framework) and DKIM (Domain Keys Identified Mail) years ago. See this quick recap if you are unsure how they tie into DMARC.
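For illustration, a typical SPF record for a domain that sends only through Google Workspace looks like this (a common example, not necessarily our exact record):

v=spf1 include:_spf.google.com ~all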
Meanwhile, the spam and phishing we received kept improving; some of it was very well done and quite deceiving (fake capital call notices, fake shared folders, etc., with many pretending to come from our domain). Malicious actors were trying to leverage our brand/domain, likely to steal credentials or spread ransomware.
But you don’t have to be a financial institution to be a target: eBay, Deliveroo and Netflix all have strict DMARC policies [2]. A forged transactional email could, for example, lead to an account takeover. With the right DMARC policy in place, a forged email is less likely to reach the recipient’s inbox.
So we deployed a basic, report-only DMARC policy and used Report URI to establish a baseline. A few weeks later, the stats showed dozens of unknown senders in odd countries. Then, we changed the policy to quarantine and continued to monitor. Finally, we updated the policy to reject 100% of the messages that failed DMARC checks.
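For reference, each phase is just a different value on the _dmarc TXT record; the progression looked roughly like this (the reporting address is a placeholder):

- Report-only baseline: v=DMARC1; p=none; rua=mailto:dmarc@yourdomain.com;
- Quarantine: v=DMARC1; p=quarantine; rua=mailto:dmarc@yourdomain.com;
- Full enforcement: v=DMARC1; p=reject; rua=mailto:dmarc@yourdomain.com; pct=100;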
We’ve had this setup for over two years now. On average, we see 30-50 DMARC rejects a month from all over the world. For instance, last month’s unauthorized senders came from Morocco, Pakistan, Vietnam, Serbia, and Puerto Rico. These numbers would undoubtedly be orders of magnitude higher if we were a more prominent firm.
Go and get a robust DMARC policy set up!
I maintain one static website, built with webpack and deployed on Netlify, which was an easy candidate to see if node v18 introduced any bugs in our build and deploy process.
In the Netlify site settings, I changed the NODE_VERSION environment variable to 18 and triggered a deploy, which failed. Here’s the log:
3:00:43 PM: Build ready to start
3:00:51 PM: build-image version: ac716c5be7f79fe384a0f3759e8ef612cb821a37 (xenial)
3:00:51 PM: build-image tag: v3.13.0
3:00:51 PM: buildbot version: e58b6be665675c0f99b33132a8c1eec1f775eba1
3:00:51 PM: Building without cache
3:00:51 PM: Starting to prepare the repo for build
3:00:51 PM: No cached dependencies found. Cloning fresh repo
3:00:51 PM: git clone [REDACTED]
3:00:54 PM: Preparing Git Reference refs/heads/master
3:00:55 PM: Parsing package.json dependencies
3:00:56 PM: Starting build script
3:00:56 PM: Installing dependencies
3:00:56 PM: Python version set to 2.7
3:00:57 PM: Downloading and installing node v18.0.0...
3:00:57 PM: Downloading https://nodejs.org/dist/v18.0.0/node-v18.0.0-linux-x64.tar.xz...
3:00:58 PM: Computing checksum with sha256sum
3:00:58 PM: Checksums matched!
3:01:00 PM: node: /lib/x86_64-linux-gnu/libm.so.6: version `GLIBC_2.27' not found (required by node)
node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by node)
node: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.28' not found (required by node)
nvm is not compatible with the npm config "prefix" option: currently set to ""
3:01:00 PM: Run `nvm use --delete-prefix v18.0.0` to unset it.
3:01:00 PM: Failed to install node version '18'
3:01:00 PM: Build was terminated: Build script returned non-zero exit code: 1
3:01:01 PM: Creating deploy upload records
3:01:01 PM: Failing build: Failed to build site
3:01:01 PM: Failed during stage 'building site': Build script returned non-zero exit code: 1 (https://ntl.fyi/exit-code-1)
3:01:01 PM: Finished processing build request in 10.203136084s
Clearing the cache and retrying the deploy yielded the same result.
Since Netlify uses nvm to manage node versions, I wondered what was wrong with the build environment. As can be seen in the logs, the build environment was still running Ubuntu Xenial (16.04), which is no longer actively maintained.
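The failure makes sense: the log shows node 18 looking for GLIBC_2.28, while Xenial ships glibc 2.23. You can check the glibc version of any build image with a one-liner:

# Prints the glibc version of the current environment
ldd --version | head -n 1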
Thankfully, Netlify allows you to select Ubuntu Focal (20.04). To do so, navigate to Site settings > Build & deploy > Continuous Deployment > Build image selection.
I cleared the cache and it built and deployed perfectly this time.
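As an aside, the same version pin can live in netlify.toml instead of the UI; Netlify’s file-based configuration supports build environment variables:

[build.environment]
  NODE_VERSION = "18"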
Sometimes the build environment needs an update too!
According to many mainstream media publications and the marketing content of countless businesses, a single email is estimated to represent up to 50 (!) gCO2e. On top of that, we are urged to thoroughly clean our inboxes to minimize the footprint of the energy-intensive servers that store our correspondence. Yet, in reality, the marginal impact of sending one email is essentially zero.
Similar claims persist about the alleged carbon impact of streaming video, despite excellent research showing that those figures are vastly exaggerated. Netflix themselves shared an independent whitepaper from The Carbon Trust in June 2021; one of the key findings reads as follows:
The average carbon footprint of one hour of streaming in Europe is approximately 55 gCO2e (grams of carbon dioxide equivalents). That’s about the same as microwaving four bags of popcorn, or three boils in an electric kettle in the UK. Previous guesswork profiled in the media had this figure as high as 3200 gCO2e, or as much as microwaving 200 bags of popcorn. So quite a difference! [1]
Consider the following chart from the International Energy Agency:
Even though global internet usage is growing at an exponential rate, the energy consumption of data centres has remained flat thanks to rapid improvements in energy efficiency. In addition, data centres worldwide only consumed around 200 TWh in 2018, or about 1% of global electricity use [2].
If you want to cut down on your emissions, keeping your devices for longer or buying refurbished seems much more effective: an iPhone 13 Pro is estimated to represent 69 kg CO2e over its lifetime [3]. That’s over 1,200 hours of streaming [4].
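As a quick sanity check on that comparison: 69 kg CO2e is 69,000 g, and 69,000 / 55 gCO2e per hour ≈ 1,254 hours, hence “over 1,200 hours”.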
[1] Netflix (2021), The True Climate Impact of Streaming
[2] IEA (2019), Data centres and energy – from global headlines to local headaches?
[3] Apple (2021), iPhone 13 Pro Product Environmental Report
[4] Based on 55 gCO2e per hour, as per The Carbon Trust (2021), Carbon impact of video streaming.