Product management fundamentals: The next feature fallacy

Joshua Porter writes:

When your product is growing and ramping up new customers, it’s easy to focus on compelling new features that increase engagement.  It’s also easy to ignore dissatisfaction among your growing base of existing customers, because your growth rate exceeds your churn.

Things start to fall apart, though, when your growth slows down.  It’s tempting to focus on exciting new features that you think will turn back the tide, and you fall into the trap mentioned above.  It makes sense in hindsight: you and your team are used to the pace and cadence that comes with new feature development. The problem, though, is that the reach of each new feature becomes smaller over time. Features that assume a specific level of engagement will, more often than not, fall flat, because discovery of the feature will never be 100%.  If you’re lucky, it will reduce churn.  It will not increase growth.

Lifting up the covers and opening up the closet often reveals things like dust bunnies and skeletons.  No one, and I mean no one, likes to work with that stuff, but it’s a necessary part of building a great product or service.

Andrew Chen writes a great response to the tweet, entitled “The Next Feature Fallacy: The fallacy that the next new feature will suddenly make people use your product”.  I especially like this quote:

How to pick the next feature
Picking the features that bend the curve requires a strong understanding of your user lifecycle.

First and foremost is maximizing the reach of your feature, so it impacts the most people. It’s a good rule of thumb that the best features often focus mostly on non-users and casual users, with the reason that there’s simply many more of them. A small increase in the front of the tragic curve can ripple down benefits to the rest of it. This means the landing page, onboarding sequence, and the initial out-of-box product experience are critical, and usually don’t get enough attention.

It’s a great read.

Concentrate on the things that matter.  Fix the stuff affecting the majority of your customers today.  Get your analytics up and running so that you understand your customer life cycle.  Most importantly of all, make sure that everything you do continues to drive towards the vision you have for your product (and it’s okay to change and pivot if you really have to).
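Chen’s reach argument is really just funnel arithmetic, and can be sketched in a few lines. This is purely illustrative; the stage names and counts below are made up, not real product data:

```python
# Toy user-lifecycle funnel: how many people can a feature possibly reach?
# All stage names and counts are hypothetical.
funnel = {
    "visited landing page": 100_000,
    "signed up":             20_000,
    "used product weekly":    5_000,
    "used product daily":     1_000,
}

def max_reach(required_stage, funnel):
    """A feature that assumes a given level of engagement can only ever
    be discovered by the users who reach that stage of the funnel."""
    return funnel[required_stage]

# A power-user feature vs. an onboarding improvement:
print(max_reach("used product daily", funnel))   # caps out at 1,000 people
print(max_reach("signed up", funnel))            # caps out at 20,000 people
```

The power-user feature can never touch more than 1% of visitors, even with perfect discovery, which is why features aimed at the front of the funnel tend to bend the curve.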


Hackintosh thoughts

File this under the First-World-Problems Dept.

I have owned and used Apple computers since 1996. Here is the list:

  1. 1996: The first was shared with my brother, an Apple Performa 6400.
  2. 2002: iBook G3 600 MHz
  3. 2007:  15-inch MacBook Pro, Core 2 Duo (Santa Rosa)
  4. 2009: Late-2008, 15-inch MacBook Pro, Unibody

I’m generally happy with my experience, although it hasn’t been a smooth ride…the iBook G3 had a smelly keyboard and a DVD drive that wouldn’t stay closed.  My MacBook Pro Santa Rosa needed a power-inverter replacement and a fan replacement, and its FireWire port didn’t work. It was a lemon that Apple graciously replaced with a late-2008 MacBook Pro, whose network port failed the first time I plugged it in at Kobo.

Aside: I think it was the network there…from what I know, at least two other late-2008 MacBook Pros were affected.

I still use my MacBook Pro (upgraded with 8GB of RAM, a 256 GB SSD, and a second hard drive in the original combo-drive slot). I’m amazed that I’ve been able to keep it running this long.

This doesn’t include the slew of work computers that I have had (MacBook Pros, MacBook Airs, etc.), two of which exhibited overheating. But I digress: this wasn’t supposed to be a post about my poor experience with Apple hardware.  I love the stuff.  Nothing, aside from Lenovo ThinkPads, comes close to the build quality that Apple puts out (but the ThinkPads are butt ugly).

The reason I am writing this is that I have an itch to build a new computer again.  In mid-2013, I built an ESXi whitebox to experiment with hardware virtualization. I recently pulled that box out of the basement and handed it to my brother because I wasn’t using it.  Lately, when I look at my Hackintosh, I often think of my long history with Apple hardware and software, and of the underlying motivation I have to build these machines rather than just buying a real Mac.

In 2009, I convinced Jen that I could build a Mac myself using some Hackintosh guides.  I built a nice quad-core Q9550 machine.  Three years later, I upgraded my Hackintosh to a build based on an i5-3570K.  I still use this today in my office as my photo workstation.

Running a Hackintosh is not without its faults.  My video card will freeze and lock up the computer.1  I’ve never bothered to get sleep working (although I know it can work).

It’s more cost effective than buying an iMac if you already have a good monitor, keyboard, etc., but it’s generally more of a pain in the ass to maintain.

After briefly flirting with ESXi on an AMD FX-8350 build, I’m itching to build another Hackintosh.  The biggest change in the “scene” is the emergence of the Clover EFI bootloader. Other than that, I see the same issues that I’ve dealt with for the past 6 years:

  • Sound doesn’t work (get a USB sound card…)
  • It won’t boot (check your hardware configuration, boot flags, .kext files)
  • Power management doesn’t work
  • System updates borked the install
  • FaceTime and iMessage don’t work

All of these are things that are easily troubleshot, and much more easily if you use a vanilla-based install from a legitimate Mac.

I don’t think cost is much of a driver anymore in the Hackintosh scene.  Six years ago, Mac hardware came at a significant premium, but the gap has mostly narrowed.  It really comes down to the folks who want a Mac that is more powerful than the Mac Mini, but not tied to the built-in monitor of an iMac.  Count me as one of those users.

However, it’s 2015 now and even the top-of-the-line Retina iMac is only ~13% faster than the comparable Retina MacBook Pro.  That’s barely above the threshold of noticeability.  In some cases, the iMac performs better than the Mac Pro.

This is in stark contrast to the newest Mac Mini, whose max CTO configuration (a dual-core i7) performs at 50% of the iMac2.  My current Hackintosh, when overclocked, is only 15% slower than the latest and greatest.  Not bad for a 3-year-old computer.
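To put those percentages side by side, here’s the arithmetic with everything normalized to the iMac. The ratios are the rough estimates above, not real benchmark scores:

```python
# Rough relative multi-core performance, normalized to the Retina iMac = 1.0.
# These ratios come from the rough estimates in the text, not real benchmarks.
imac = 1.0
retina_mbp = imac / 1.13   # the iMac is ~13% faster than the Retina MacBook Pro
mac_mini = imac * 0.50     # max-CTO dual-core i7 Mini at ~50% of the iMac
hackintosh = imac * 0.85   # 3-year-old overclocked build, ~15% slower

for name, score in [("iMac", imac), ("Retina MBP", retina_mbp),
                    ("Mac Mini", mac_mini), ("Hackintosh", hackintosh)]:
    print(f"{name:<12} {score:.2f}")
```

Laid out like this, the old overclocked Hackintosh lands within a few points of a brand-new Retina MacBook Pro, while the Mini isn’t in the same race.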

Mind you, the Hackintosh scene is pretty small (I would say that we’re talking about thousands of people…) and I doubt that Apple will ever do anything to stop people from building them, but you have to wonder whether this is even worth it anymore.

Based on what I’m seeing, the only real spot where Hackintoshes remain relevant is if you do audio engineering or movie editing and you need to supply your own hardware.  There are some use cases for 3D rendering, more so if you are willing to spend on a workstation graphics card.  Alternatively, if you want to explore the platform but don’t have access to Apple hardware, a Hackintosh is a good option to explore.

I can’t even recommend the dual-boot option.  It’s easier to get a separate Windows computer if you want to do some gaming.

Will I build another?  Doubtful. I think I’m past that phase of my life. Should I retire my Hackintosh…maybe. Hard to say whether I go with a Retina MacBook Pro or the new 5K iMac.


  1. I have since rectified this by installing another video card. 
  2. In multi-core benchmarking.  Single core performance difference is negligible. To be honest, I’m kind of disappointed. 

Lightning does strike twice – Linux and Git

We all know that Linus Torvalds is the father of the Linux kernel. It’s the guts of an operating system that can be found powering a multitude of devices: the majority of smartphones and tablets (Android), the majority of servers that power the Web, embedded systems for the Internet of Things (IoT), smart TVs, Kobo eReaders, and even some PCs and laptops.

What many forget is that Linus is also the father of Git, the most widely used source control management tool today, used by developers all over the world.

While it’s easy to argue that the Linux kernel will be what Linus is remembered for (there are other players that have also contributed to Linux’s success), it could be said that his broader impact on computing and development will be Git (which just celebrated its 10-year anniversary this week).

In my mind, they are accomplishments of equal scale.  That is just a rarity.  Simply amazing.

HP T610 Plus and pfSense

When I set up my WatchGuard Firebox x550e, I replaced the two 40mm fans with silent models.  I also swapped the PSU for a 90W pico PSU to make a nearly silent system.

One of the replacement fans gave out and started grinding a few weeks ago, so last week I replaced the box with an HP T610 Plus coupled with an Intel i350-T2 dual-gigabit Ethernet card.

It’s running a default install of pfSense 2.2 with no additional config flags, and it idles at 17-19 watts.

The T610 has 4GB RAM, a 16GB MLC SSD, and an embedded 1.65 GHz AMD G-T56N dual-core processor, so it handles all my needs without breaking a sweat.  60 Mbit/s of VPN traffic (one way) barely even registers.

I haven’t done any iperf tests yet, but it should be equivalent to some of the dual-core Intel Atom boxes people use: probably ~750 Mbit/s, and 150+ Mbit/s over VPN.


Thoughts on 2014 and 2015

So I’m starting to see end-of-year wrap-ups and predictions for 2015.  It’s always good to take a look back at what is happening in the industry, especially at Kobo.

Largely, a lot of the stuff I am citing is predicated on Mary Meeker’s “State of the Internet, 2014 ed.” report that she puts together for KPCB (May 2014). If you haven’t gone through it, I recommend that you do.  It’s a good primer for some of the stuff espoused in the predictions for 2015.

Ben Evans from Andreessen Horowitz has a great presentation called “Mobile is eating the world.”  It was released in Oct 2014 and presents an interesting stack of data that basically confirms some of the forward-looking trends in Mary Meeker’s report.

The competition for consumers’ time will become more ferocious.  The rise of messaging platforms is indicative of this–the “sipping” of conversation and shrinking attention spans will increasingly favour short-form content, and will be a big challenge for Kobo (and our competitors), as we require a magnitude-higher level of engagement to consume our content.  I’m reminded of this NYT op-ed from 2012: “The flight from conversation.”

That said, the quality of short-form content may well improve in 2015.  Well-written and well-designed content is starting to pop up, with places like Medium and Quartz leading the way.  These networks will be the ones pushing innovation on the discovery problem, and may lead to some interesting applications in the stuff my team designs and manages.

On the hardware side, all I see is “sensors, sensors, everywhere”…with the Samsung Galaxy S5 pushing 10 different sensors (gyro / fingerprint / barometer / hall (recognizes whether the cover is open/closed) / RGB ambient light / gesture / heart rate / accelerometer / proximity / compass).  All of this has the potential to be collected and mish-mashed into something useful (not sure what this is… yet).  Is quantified self really a new industry, or are we just navel gazing?

I don’t expect this to change much in 2015, although I feel we’ll be hitting the “trough of disillusionment” very shortly. Companies will struggle to bring meaning to all the data collected, and the diminishing returns/insight/usefulness consumers will see will probably trip some alarm bells with regards to privacy and security.  On top of all this, I feel that the industry is really just waiting to see what Apple will do.

Steven Sinofsky, of Windows 8 fame, penned an interesting op-ed for Re/code: “Forecast: Workplace trends, choices and technologies for 2015”.  It’s not necessarily applicable to me at Kobo, but you can see the same trends beginning to move into the enterprise space.

While 2014 was a banner year for Kobo, the competition around eReading continued to be fierce. Oyster and Scribd entered the market offering a distinctly different business model; these companies continue to see success acquiring book rights and growing their user bases on VC capital. Wattpad continues to operate without any strong competitor in the serialized, self-published space, capturing the next generation of heavy readers and authors. New eReading startups like Aerbooks and Glose are entering the market with differentiated experiences, touting different solutions to the persistent discovery issue of “What to read next?”

Incumbents did not sit idle either. Google Books has incrementally improved its Android experience, offering parity with the Apple, Amazon and Kobo offerings and an improved non-fiction reading experience. Amazon, with amazing agility, rolled out its Kindle Unlimited program to follow suit with the likes of Oyster and Scribd. Apple’s bundling of iBooks with iOS 8 has further eroded the iOS platform for virtually every eBook retailer on the market. The latest reports indicate that the bundling of iBooks into iOS is adding as many as 1 million new users a week. Barnes & Noble continues to play in the market, but saw little, if any, product updates, and I suspect will most likely be a non-factor in 2015.

On a side note: if you haven’t listened to “Serial”, I wholly recommend it.  It’s great storytelling.

Review: Into the Woods Film (2014)

I want to thank Ben and Sara for taking the kids out to Disney on Ice at the Rogers Centre this past Saturday. It had been a while since Jen and I spent time alone together, and we decided to watch Into the Woods at the Don Mills Cineplex VIP1.

Now, I’ve seen a stage production of Into the Woods at the Stratford Festival back in 2005 with Jennifer and Jason and enjoyed that particular staging. It stayed very true to the original 1987 Broadway production with Bernadette Peters as the Witch and Joanna Gleason as the Baker’s Wife.

The Hollywood transfer of the musical was well done.  It enhances the setting by providing a luscious backdrop for some of the songs (in particular, “Agony” was over the top: total props to Chris Pine and Billy Magnussen for arguably stealing the show).  However, I feel that it missed its mark somewhat when compared to the musical.

Ultimately, I think of the original stage production as a meta-fable, where the moral of the story is that there are consequences to your decisions in the real world.  That’s how I saw it, at least.  I think the film “misses” by underplaying this.  It doesn’t give this tenet time to gestate.  The curse is broken in Act I and the Baker’s Wife is magically pregnant; cut to the speech at the castle with the prince and his new wife, and begin Act II.  There are small, subtle things in the original staging that imply the characters are not 100% happy: the strain on the Baker and his Wife’s relationship with their new son, him shirking his responsibilities, Cinderella’s unhappiness with royal life–all things that add a bit more tension.  The removal of the Giantess’s exposition really just made her into a B-movie monster, whereas in the musical you get to understand how much she has lost due to Jack’s actions. The reprise of “Agony” in the second act underscores Prince Charming’s dalliance with the Baker’s Wife and Sleeping Beauty.  It makes the emotional betrayal that Cinderella feels even more impactful.

Every decision we make has consequences, both good and bad.  We need to grow up and accept responsibility.  These themes didn’t carry over from the musical to the film as well as they could have.

There are a few other small things that didn’t transfer well from the theatre to film.  Much of the dialogue, especially the pauses, did not transfer to the film at all.  It made some of the more humorous moments just fall flat; there needed to be an audience to play off of.

That said, some of the changes were very well done.  Anna Kendrick’s scene on the staircase made way more sense as an internal monologue than it did as a conversation with the Baker’s Wife. “Agony” was over-the-top (but I wish they had done the reprise because the first performance was so good!). Billy Magnussen showed amazing comedic physicality.  I didn’t miss the elimination of Rapunzel’s storyline all that much.  Chris Pine showed remarkable range, from charming to smarmy–and he can sing too.

Meryl Streep as the Witch did a good turn (especially since she had to go up against the likes of Bernadette Peters, Vanessa Williams, and Donna Murphy).  Although I wonder if they should have gone with Bernadette Peters, who originated the role.

Overall, I walked out of the theatre with a 7.0/10 rating.  After Jen and I had time to dissect it a bit, it’s definitely a 6.5/10 for me.

  1. For those who don’t know what VIP is, it’s a luxury, adults-only line of cinemas from Cineplex. It’s nice, and at the price premium ($25 a ticket) you order your food while seated and they bring your order directly to your seat. There is also a lounge area where you can order dinner–so in retrospect, it’s really a one-stop movie-and-dinner experience. 

HP T620 thin client

[UPDATE – 2014/11/26: Made a few updates on the hardware.]

I’ve been fascinated with repurposing PC thin clients.  I like them because they are virtually silent and very energy efficient1.  I’ve used one for pfSense, and another as an XBMC box (XBMC is now called Kodi). They can be acquired pretty affordably, as organizations that invest in these boxes usually swap them out at a steady pace (2-3 year leases).

Earlier thin clients were based on more exotic hardware (embedded CPUs from VIA, Cyrix, AMD), but modern clients use embedded SoC versions of mobile x86 CPUs.  We’re talking full-on dual- and quad-core AMD APUs, or even full-out Intel Celeron/i3/i5 chips with Intel HD graphics, all buttoned up into a custom mini-ITX or mATX form factor with an included DC-to-DC power supply.

I managed to pick up an HP T620 Plus on eBay for less than $200 CAD. This model was released last year and features an embedded AMD “Kabini” processor (GX-420CA) with 4GB RAM, AMD HD 8400 graphics, plus a FireGL 2270 video card.  It’s powered by a 90W pico PSU, with a heat-pipe CPU cooler and a low-RPM fan. The FireGL card can easily be swapped out for better graphics or networking.  It is virtually identical to the AMD A6-5200 and AMD Athlon 5350 in terms of performance and features, and about twice as fast as the AMD E-350-based HP T610 thin client that I am using for XBMC.  It should be able to transcode a single 1080p stream in realtime.

A few things to note:

  • Storage is mSATA only. I’ve paired it with a 128 GB Crucial M500 SSD.

  • UPDATE: There is also an M.2 (NGFF) port available.

  • UPDATE: In addition to the 2 x USB3 and 4 x USB2 ports on the back and the front, there are also 2 x USB headers inside for flash storage, Bluetooth, WiFi, etc.

  • The onboard graphics uses two full-sized DisplayPort connectors. This particular model came with a working FireGL 2270 card.  It’s not very useful and I’ve already removed it.

  • If I use the box for pfSense, I’ll add an Intel GigE dual-NIC

  • I might add a Gigabyte GB WB300D WiFi and BT 4.0 card.

  • The onboard GigE port is no longer Broadcom-based. It’s a cheap Realtek controller (RTL8111/8168/8411 rev C)

  • The PCIe expansion bay only accepts low-profile cards. This is a pretty significant difference from the previous version.

  • 2 serial ports + a Parallel port.

  • The second serial port can be rewired to a VGA connector using a 15-pin VGA header cable.  I am fortunate to have a spare that I tried to add to my Watchguard Firebox x550e box.

  • UPDATE: The VGA connector uses a small 16-pin port that I have never seen before.  I haven’t located a cable yet (the best I can find is a small 12-pin VGA header cable)

Add some storage (this one had a bad mSATA drive) and a DisplayPort-to-HDMI adapter, and you have a complete system that is basically the same as an AMD AM1 Athlon 5350 build. At less than $200 CAD, I certainly couldn’t build an equivalent unit off the shelf for that price.

The BIOS on these thin clients is very bare-bones.  Don’t expect to overclock the system, as there doesn’t appear to be any means of OC’ing the chips.

This will most likely replace my newish T610-based XBMC computer.  The great thing is that some parts are interchangeable; for instance, I have a spare Bluetooth and WiFi Mini-PCIe adapter from my T610 that I can reuse.  Not too shabby a system, and I’m excited to put it through its paces as an XBMC client or as a pfSense router with AES-NI support.

Here is a readout of “lspci”:

It runs pretty cool at full load: a Prime95 torture test with all four cores maxed only pushed it to 65˚C (23˚C ambient).

  1. 15 to 18 Watts. 

Forbes reviews the Kobo H2O

Jordan Shapiro writes 3 Reasons Why Kobo’s Aura H2O is the Perfect Luxury E-Reader:

 Kobo is the quiet Kindle competitor–the underdog in the eReader market. They released their most recent premium eReader at the beginning of October. I’ve been reading on the Aura H2O ever since. I sometimes use my Kindle Paperwhite when I have to read an eBook I bought from Amazon, but I prefer the Aura H2O.

I believe this is the first product to have inspired thoughts about French philosophers and epistemological constructs.

Probably won’t be the last.

Hats off to the team for building the best luxury eReader on the market.

Feature Pruning: When and how to kill a product or feature?

Kobo hosted the November 5th ProductTO MeetUp.  I facilitated a session on “When and how to kill a product / feature?”

The session went well.  It was nice to participate in the larger community for once–it’s something that Kobo has never been good at, but I’m going to make it a priority for 2014 to host these types of events in the future, or at least participate at a personal level.

I often approach feature pruning from the perspective of operational expense.  Old features that are not used, or that are no longer relevant, have a cost associated with them:

  • They take up space in the UI, and they make the UI more confusing because you have to move things around or add hierarchy.
  • Code quality is reduced. Oftentimes, old features use old programming paradigms.
  • They add weight to the code that you need to carry forward with each release.  You feel that as SDKs update or through regression testing.

It makes the code in the product more brittle.

The same can be said at a macro-level with the company’s product portfolio.

I kind of want to write my own “Spring Cleaning” blog for Kobo; similar to how Google announces it.

However, it takes effort to gracefully remove and prune items–effort that could be used to add new and innovative features.  There is also a downside for some of our customers, most likely a minority, who rely on the specific feature that we deprecate.  How do you handle the collateral damage when everyone has a Twitter or Facebook account?

The response that I get internally at Kobo is split.  Now, 100% of the people understand what I am saying, but only 50% respond positively with outright support.  The other 50% are reluctant to accept my approach.

One of the learnings from the session that I facilitated is that I should change my approach. Rather than a subtractive discussion, make it about “additive value”. Spin it in a positive light by saying that removing this feature or product will allow us to focus on specific functionality that we know is extremely valuable.

I’m going to give that a try.

Set up pfSense as an OpenVPN client for specific devices


[UPDATE – 2014/11/01: Based on trying to help a redditor with troubleshooting, I actually tried this out on my backup router.  I’ve updated the post.] [UPDATE – 2014/11/03: Added a note for those using the pfSense 2.2 betas.  There is a bug that prevents this from working.]

Note: This how-to is meant for pfSense 2.1.x. For those using the 2.2 beta, there is a bug that prevents this from working.  Read about it in the pfSense forum thread, “cannot NAT trough OPT1 interface on multiwan.”  The bug has been filed in Redmine and, at the time of this writing, it has been fixed for IPv4 traffic.

One of the most powerful features of pfSense is its ability to direct your data requests through different end-points using NAT rules. In my case, I like to be able to access the content on Netflix US. In comparison, Netflix Canada’s content is somewhat anemic, although we do get such gems as Community and The Good Wife here. There are many ways to access Netflix US content (and BBC iPlayer content) outside of the geo-fenced territories.  I prefer to use a Virtual Private Network (VPN). pfSense is amazing as an OpenVPN client because I can selectively route any device on my network through the VPN service (i.e., my tablets and TV go through US servers, while my smartphone, VoIP, and computers go through my local ISP).

There are other reasons for using a VPN:

  • Anonymize your traffic to defeat deep-packet inspection used by ISPs to throttle your data.
  • Secure your browsing / network sessions while on a public network (e.g., Coffee Shop’s Wifi).
  • Have your originating IP address appear to be from anywhere. This is especially useful if you need to do online banking overseas.
  • Access the internet through a consistent Static IP address.
  • Unblock sites that are geo-fenced (like Netflix US or BBC iPlayer).

In this particular case, I am using the VPN to tunnel my Internet traffic through to a server located in the United States. This VPN server acts as a “proxy” or “end-point” for all my HTTP requests. To websites on the receiving end of my requests, I appear to be in the country where the VPN server resides (in this case, the US). I prefer VPNs because I can visit other sites (not necessarily just Netflix US) and see the local experience on both desktop and device. OpenVPN provides the most secure means of doing this.  The provider that I’ve chosen is StrongVPN (although I use others) as they have:

  • Dedicated, statically-assigned IP-address (when connecting over OpenVPN)
  • A proven track record of not overselling
  • Apps for all major platforms

Things that StrongVPN does NOT offer:

One of the reasons I prefer a consistent, statically assigned IP address is that I can guarantee access to specific servers and whatnot through IP whitelisting. But that’s another topic.

I like using pfSense because I can set it up as an OpenVPN client and use the router (currently an upgraded WatchGuard x550e) to offload the encryption handling. By setting up the OpenVPN client as a gateway, I effectively negate the load on the devices connecting to the Internet through the VPN. Having it at the router level also means I can share the connection with multiple devices connected to my wireless or wired network.  Having a 2.0 GHz Pentium M-based router means I can easily max out my 45/4 Mbps cable connection when going through the VPN1.

I can also use NAT-based rules to select which devices use the VPN connection and which bypass the VPN altogether, accessing the Internet through the default WAN provided by my ISP.  For instance, my VoIP ATA connects directly because I don’t want to add latency to the connection by going through StrongVPN’s server in NYC.

NOTE: This probably works with IPSec, PPTP and L2TP, but YMMV.

How To

Get a VPN account, select a fast server, and download the OpenVPN configuration file

  1. Setup an account with StrongVPN (or any other VPN provider).
  2. Select an appropriate package based on your location.  Most VPN providers offer a discounted package for an annual fee (best value).  Ensure that it has the locations that you are interested in and that the package offers OpenVPN support.
  3. Sign into StrongVPN and use their tools to select a server in the country that you would like to route your data through.  They have speed tests that I found were useful.
  4. Go to the “Setup Instructions page” > “Manual Setup – All other devices” and download the OpenVPN config file (for PC and Mac)
  5. Open the vpn-inXXX_ovpnXXX_account.ovpn file in a text editor.  You’ll use this data to set up the connection in pfSense.

What is this *.ovpn file?

I won’t get into the technicals of public key encryption, what a certificate authority is, or what certificates do.

The *.ovpn file is a configuration file. It is divided into six sections:

  1. IP addresses for the VPN server that you want to connect to and the default UDP ports required.
  2. A list of configuration flags that you will use to optimize the connection in pfSense.
  3. The certificate for your Certificate Authority (CA).  It begins with <ca> and ends with </ca>. It looks something like this:
  4. You’ll have another section that contains your private.key. It starts with <key> and ends with </key>. It looks like this:
  5. You’ll then have your VPN certificate.  It’s defined by the <cert> </cert> tags.
  6. Finally, you’ll have your OpenVPN Static Key.  It starts with <tls-auth> and ends with </tls-auth>.
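Putting those six sections together, a client .ovpn file is shaped roughly like this. This is only a sketch: the server address, ports, and all key material are elided, and the exact directives vary by provider.

```text
remote <server-address> 4672    # section 1: server and port (UDP)

# section 2: tuning flags, for example:
tun-mtu 1500
fragment 1390
mssfix 1390

# section 3: the Certificate Authority certificate
<ca>
-----BEGIN CERTIFICATE-----
...
-----END CERTIFICATE-----
</ca>

# section 4: your private key
<key>
...
</key>

# section 5: your VPN certificate
<cert>
...
</cert>

# section 6: your OpenVPN static key
<tls-auth>
...
</tls-auth>
```

The steps below pull each of these sections out of the file and paste them into the matching fields in pfSense.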

Enter your Certificates into pfSense

NOTE: I am using pfSense 2.1.5.

You’ll need to add your Certificate Authority, OpenVPN certificate and private key data into pfSense.  It’s just copy and pasting.

  1. Go to “System” > “Cert Manager”
  2. You will see three tabs:
    1. CAs
    2. Certificates
    3. Certificate Revocation
  3. In the CAs tab, click the “+” icon to add a new certificate Authority
    1. Provide a name like “<VPN PROVIDER> CA”
    2. Copy and paste the <ca> section from the .ovpn file. NOTE: do NOT include the <ca> and </ca> tags.
    3. It should look like this:
  4. Click “Save”.
  5. Go to the “Certificates” Tab and click the “+” icon to add your VPN certificate and private key.
    1. Provide a name like “<VPN PROVIDER> CERT”
    2. Copy and paste the <cert> section from the .ovpn file into the “Certificate data” text box. NOTE: do NOT include the <cert> and </cert> tags.
    3. Copy and paste the <key> section from the .ovpn file into the “Private key data” text box. NOTE: do NOT include the <key> and </key> tags.
    4. It should look like this:
  6. Click “Save”.

Configure your OpenVPN Client

You’ll need to configure pfSense to act as the OpenVPN client.

  1. Go to “VPN” > “OpenVPN”
  2. You’ll see 4 tabs:
    1. Server – Makes your pfSense router into a server.
    2. Client – connect your router to an OpenVPN server. <– You want this tab
    3. Client Specific Overrides – Allows you to set special directives that change the behaviour of a specific client connection.  For instance, you can force the OpenVPN client to use Google’s DNS servers.
    4. Wizards – Helpful step-by-step tutorial to set things up.
  3. Click the “Client” tab
  4. Click the “+” icon to add a new client.
  5. You’ll be required to enter your static key and use the details from sections 1 and 2 of the .ovpn file to configure it.  Use the image below as a guide.
    1. NOTE: This is very specific to StrongVPN.  You will need to experiment with the settings given to you by your VPN provider.
    2. You’ll need to copy your OpenVPN Static Key into the TLS Authentication text box. Note: remember to leave out the <tls-auth> and </tls-auth> tags.
    3. StrongVPN offers several ports to connect with. I specify the first: port 4672, type UDP.
    4. In the “Peer Certificate Authority” dropdown, select the “<VPN PROVIDER> CA” certificate authority you made above.
    5. In the “Client Certificate” dropdown, select the “<VPN PROVIDER> Cert” you made.
    6. Set the Encryption Algorithm based on the option available to you in the .ovpn file.
    7. Depending on your hardware, you should select whether you have hardware crypto acceleration (e.g., VIA Nano, AMD Geode, Hifn, or an AES-NI-capable CPU).
    8. In the advanced configuration text box, you’ll enter the items from section 2 of the .ovpn file. Experiment with what works.  You’ll see errors in the log files if an attribute doesn’t work. This is what I use:

      verb 4;tun-mtu 1500;fragment 1390;mssfix 1390;keysize 128;key-direction 1;redirect-gateway def1;persist-tun;persist-key;route-delay 2;explicit-exit-notify 2;comp-lzo yes;

  6. Provide a name and click “Save”.
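For reference, pfSense’s advanced box takes the directives semicolon-separated, but in a plain .ovpn file the same options would appear one per line. This is just the line I use above, reformatted; your provider’s values may differ:

```
verb 4
tun-mtu 1500
fragment 1390
mssfix 1390
keysize 128
key-direction 1
redirect-gateway def1
persist-tun
persist-key
route-delay 2
explicit-exit-notify 2
comp-lzo yes
```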

Check your VPN logs now!

You’ll want to see if you can successfully connect with your service provider through the system logs.

  1. Go to “Status” > “System Logs”
  2. Select the “OpenVPN” Tab
  3. Verify that you have successfully connected. Specifically you want to see, “Initialization Sequence Completed”.

If you don’t see it, you are not connected.  Check your configuration again and use the log to look for errors; these are probably flags in your advanced settings. Double-check that you pasted in the right TLS Authentication key.
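If you prefer the command line, you can script the same check from a pfSense shell (Diagnostics > Command Prompt or SSH). The snippet below fakes a small log excerpt so it is self-contained; on a real box you would grep /var/log/openvpn.log instead (the default location on 2.x installs, an assumption worth verifying on your version):

```shell
# Simulated excerpt of the OpenVPN log; on a real pfSense box,
# grep /var/log/openvpn.log instead (path may vary by version)
cat > /tmp/openvpn.log <<'EOF'
openvpn[64123]: TLS: Initial packet from [AF_INET]203.0.113.10:4672
openvpn[64123]: Initialization Sequence Completed
EOF

# A successfully connected tunnel logs "Initialization Sequence Completed"
if grep -q 'Initialization Sequence Completed' /tmp/openvpn.log; then
  echo "VPN tunnel is up"
else
  echo "not connected - recheck TLS key and advanced options"
fi
```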

Time to set up our OpenVPN gateway interface

If you’ve gotten this far, congratulations.  Now all you need to do is set up pfSense to route traffic through the dedicated VPN tunnel we’ve just created.  We’re going to set up the tunnel as a gateway interface and then route traffic based on IP address using firewall rules.

  1.  Go to “Interfaces” > “(assign)”.
  2. Click the “+” icon to add a new interface. It will be called “OPT1” if you don’t already have one.
  3. In the “Network Port” dropdown, select “ovpnc1 <VPN PROVIDER>”.  This is a virtual network port for you to send data through.
  4. Now change the name of OPT1 to something more useful.
    1. Click the “OPT1” hyperlink on the left side.
    2. Provide a descriptive name.
    3. Click “Save”
  5. It should look like this:

TROUBLESHOOTING: Verify that you have working gateways

When I tried configuring a spare box, I ran into trouble getting this tutorial to work on a fresh install of 2.1.5.

Verify that you are getting an IP address in the pfSense homepage.

  1. Click the pfSense logo in the top, left-hand corner.
  2. Verify that you have an IP Address for your VPN.
  3. If not, go to “Status” > “Services”
  4. Restart the OpenVPN service by clicking the stop button, waiting, and then the play button.

Verify that your gateways are available in “System” > “Routing”

  1. Go to “System” > “Routing”
  2. In the “Gateway” tab, you should see 4 gateways:
    1. WAN IPv4 with an XXX.XXX.XXX.XXX IP Address
    2. WAN IPv6 with a hexadecimal IP Address
    3. StrongVPN IPv4 with a ZZZ.ZZZ.ZZZ.ZZZ IP Address
    4. StrongVPN IPv6 with either “dynamic” or a hexadecimal IP Address

It should look like this:

If no IP addresses are there, open the StrongVPN entries, scroll down, and click “Save”.  That seemed to restart it for me.
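You can also confirm the tunnel route from a shell (pfSense runs on FreeBSD, so `netstat -rn` works from Diagnostics > Command Prompt). The excerpt below is simulated so the sketch is self-contained; on a real box, pipe the live output through the same grep:

```shell
# Simulated `netstat -rn` excerpt; on pfSense, run the real command from
# Diagnostics > Command Prompt. ovpnc1 is the OpenVPN client interface
# assigned earlier; the addresses here are documentation examples.
cat > /tmp/routes.txt <<'EOF'
default          203.0.113.1    UGS    em0
10.8.0.0/24      10.8.0.5       UGS    ovpnc1
EOF

# If no route uses ovpnc1, the OpenVPN client never finished connecting
if grep -q 'ovpnc1' /tmp/routes.txt; then
  echo "VPN route present"
else
  echo "no VPN route - restart the OpenVPN service and recheck the logs"
fi
```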

Set your Outbound NAT rules to Manual Generation

You will need to know the IP address of each device you want to route.  I set up static DHCP mappings for my own devices, but that’s optional; most home networks don’t really need it.

  1. Go to “Firewall” > “NAT”.
  2. Select the “Outbound” tab.
  3. Select the “Manual Outbound NAT Rule Generation (AON)” radio button.
  4. Click “Save”
  5. You’ll see a list of interfaces that look like the picture below:

TROUBLESHOOTING: Only 3 entries for Outbound NAT rules, not 6

You should see 6 entries (like above) when you set your system to “Manual Outbound NAT rule generation”.  However, when I tried doing this on a fresh install of 2.1.5, I was only given 3 NAT entries for WAN.2  Since your VPN is another gateway, you should have an additional 3 (as depicted above).  If you don’t see them, verify that the gateway is there with an IP address by going to “System” > “Routing”.

If the gateway is there, then you need to create the proper WAN rules.

  1. Make a copy of the first WAN rule that says “Auto created rule for ISAKMP – LAN to WAN” by clicking the “+” button beside it.
  2. In the “Interface” dropdown, select “<VPN PROVIDER>”.
  3. Change the name to “Auto created rule for ISAKMP – LAN to <VPN NAME>”
  4. Repeat this for the next 2 WAN rules.
  5. Position the rules as seen in the image above.

You want to duplicate all the rules so that the VPN has proper NAT directions.

Create firewall rules for your devices

You’ll need to create rules in the “StrongVPN” and “OpenVPN” tabs under “Firewall” > “Rules”.  After that, all you need to do is specify the IPs of the devices you want to send through the VPN. The last rule you create is a blanket rule that directs all other, non-specified devices through the WAN (rather than the VPN).

Note: I am making an assumption that most traffic goes through your ISP and not your VPN.

  1. Go to “Firewall” > “Rules”.
  2. Select the “<VPN PROVIDER>” tab
  3. Click the “+” icon to add a new rule.
  4. Create a “Pass” action for all IPV4 traffic through the “<VPN Provider>” Interface.
  5. It should look like this:
  6. Click “Save”
  7. Click “Apply Changes”
  8. Select the “OpenVPN” tab.
  9. Create a “Pass” action for all IPv4 traffic through the “OpenVPN” Interface.
  10. In the “Advanced features” > “Gateway” dropdown,  select your “<VPN Provider>”.
  11. It should look like this:
  12. Provide a descriptive name and click “Save”
  13. Click “Apply Changes”.

Now it’s time to select your devices.  You’ll need to know their IP address.

  1. Go to “Firewall” > “Rules”.
  2. Select the “LAN” tab.
  3. Click the “+” icon to add a rule.
  4. Create a “Pass” action for the device
    1. Set “Action” dropdown to “Pass”
    2. Set Interface to “LAN”
    3. TCP/IP Version to “IPv4”
    4. Protocol to “Any”
    5. Source: Set to “Single host or alias” and provide the IP address or “alias name”
    6. Provide a descriptive Name
    7. In “Advanced features” > “Gateway”, select the gateway you want to use:
      1. “WAN” for your ISP, or
      2. “VPN” to route traffic through OpenVPN.
    8. Click “Save”
  5. It should look like this:
  6. Click “Apply Changes”

Repeat for any device (Tablet, SmartTV, XBox, Hackintosh etc.)

Create a rule for non-specific devices

Finally, the last rule you need to make specifies that all other devices on your LAN use the default WAN.

  1. Go to “Firewall” > “Rules”.
  2. Select the “LAN” tab.
  3. Click the “+” icon to add a rule.
  4. Create a “Pass” action for the device
    1. Set “Action” dropdown to “Pass”
    2. Set Interface to “LAN”
    3. TCP/IP Version to “IPv4”
    4. Protocol to “Any”
    5. Source: Set type to “LAN Net”
    6. Provide a descriptive name like “DEFAULT REST OF LAN TO WAN”
    7. In “Advanced features” > “Gateway”, select “WAN_DHCP – XXX.XXX.XXX.XXX”
  5. Click “Save”
  6. It should look like this:
  7. Click “Apply Changes”

Ensuring rules are applied in the proper order

In order to ensure that the rules are applied in the proper order, you’ll need to move the items up and down the list in the “LAN” tab under the “Firewall > Rules” section of pfSense.

Make sure that all the rules are above the red line. Device-specific overrides go at the top, with the catch-all rule for non-specific devices as the last rule above the red line.

Use this image to help out:

Make sure to apply the changes and let the firewall rules process.

You can verify your external IP address by visiting StrongVPN’s website and looking at the reported IP and country of origin.

Hope you found this useful.

NOTE: FWIW, I think you could accomplish this through VLANs.


  1. Provided by
  2. I suspect it is because my VPN gateways were not registered yet.