How we work
What an engagement looks like, what we expect from you, what you should expect from us, and what we deliberately don't do.
The first 90 days
Most fractional CTO engagements start the same way: someone at board level has realized the organization's technology isn't keeping up with the business. Maybe the IT team is competent at keeping the lights on but nobody's thinking strategically. Maybe vendor costs have been climbing for three years and nobody can explain why. Maybe an acquisition is on the horizon and nobody knows what's actually running underneath the target company. Maybe security has become a boardroom topic and the honest answer is that nobody's confident in the posture.
Whatever the trigger, here's what the first ninety days typically look like.
Weeks 1–3: Discovery
We talk to everyone who touches technology. The IT team, obviously. But also the department heads who use it, the finance director who pays for it, the operations director whose team works around its limitations, and the board members who worry about it without fully understanding it. We're not just auditing systems. We're mapping the gap between what the business needs technology to do and what it's actually doing.
In parallel, we audit the technology estate. Every system, every contract, every vendor relationship, every undocumented workaround that someone built three years ago and forgot about. We review the network architecture, the security posture, the backup and disaster recovery arrangements, the licensing agreements, the carrier contracts. We're looking for the things nobody told the board about: the critical database running on an end-of-life operating system, the security policy that exists in a document but not in practice, the three overlapping Microsoft licensing agreements signed by different people in different countries, the "redundant" internet circuits that share a physical path for the last mile.
Discovery also includes a review of the IT team itself. Do they have the skills and resources to execute what the business needs? Are they structured sensibly? Are they spending their time on strategic work or firefighting? In many organizations, capable IT people are trapped in a cycle of reactive support because nobody has ever prioritized the proactive work that would reduce the support burden. That's a structural problem, not a people problem, and it's fixable.
Weeks 4–6: Quick wins and honest assessment
There are always quick wins. A contract renegotiation that saves real money because the current terms were set five years ago and nobody's benchmarked since. A security gap that's been sitting open for months because it was too technical for the board to prioritize and too political for the IT team to escalate. A system migration that the IT team wanted to do but couldn't get approved because nobody senior enough was championing it.
We find those and execute them, because nothing builds credibility faster than solving a problem someone's been complaining about for a year. It also gives the board early evidence that the engagement is producing results, which matters when you're asking them to invest in a longer-term technology transformation.
At the same time, we prepare an honest assessment for the board. Not a two-hundred-page report that nobody reads, but a clear, jargon-free picture of where the technology estate is, what's working, what's a liability, and what needs to change. We grade everything red, amber, or green. Not because boards love traffic lights, but because forcing an explicit judgment on every system and process reveals the ones you've been politely ignoring. The items in amber are usually the most important: they're the ones that are "fine for now" but decaying toward failure.
Weeks 7–12: Strategy and roadmap
By this point we know the business, we know the technology, and we've established credibility by fixing real problems. Now we build the roadmap: what changes in the next quarter, what changes in the next year, what's a three-year play. Every item has a cost estimate, a risk assessment, and a clear rationale tied to a business outcome.
"We should upgrade the firewall" is not a strategy. "The current firewall doesn't support the network segmentation required for PCI DSS compliance, and non-compliance puts our card processing agreement at risk by Q3" is a strategy the board can act on. Every recommendation in the roadmap follows this structure: what's the problem, what's the business consequence of not fixing it, what's the solution, what does it cost, and when does it need to happen.
The roadmap gets presented to the board. Debated. Refined. Prioritized against available budget and organizational capacity. And then we start executing, with clear milestones and accountability that the board can track without having to understand the underlying technology.
After the first 90 days
Once the roadmap is in place, the engagement shifts from assessment to execution and governance. The specific cadence depends on how much is in flight, but a typical fractional CTO engagement runs at two to four days per month once it's past the initial assessment phase.
Monthly board reporting. Quarterly vendor reviews. Ongoing budget governance against the roadmap. Contract negotiations when renewals come up. Architecture oversight when projects require network or infrastructure changes. Security posture monitoring and periodic reassessment. And team management and development, because the IT team's capability is the single biggest determinant of whether the roadmap actually gets delivered or just sits in a shared drive.
The engagement continues until one of two things happens: the organization grows to the point where a full-time CTO or IT director is justified and we help hire our replacement, or the organization decides that the fractional model is actually the right permanent model and we continue indefinitely. Both outcomes are fine. We're not trying to create dependency. We're trying to get the technology function to a place where it's governed properly, whatever that governance model looks like.
How we approach critical connectivity
Every network project starts with the same question: what does this network actually need to do? Not what equipment you already own. Not what your current provider is comfortable supporting. Not what the vendor demo looked like. What does the operation require, and what happens when it fails?
That second question matters more than most people think. The failure mode defines the architecture. A corporate office losing internet for an hour is annoying but survivable. A broadcast feed dropping during a live transmission is a contractual breach that damages reputation and revenue. A vessel losing connectivity in open water is a safety issue with regulatory implications. A military communication system going dark is something far worse. The design has to reflect the stakes, and the stakes determine how much redundancy, diversity, and failover complexity is justified.
Requirements first, technology second
We document the requirements before we look at a single product datasheet. Bandwidth requirements (sustained and burst). Latency tolerance. Jitter sensitivity. Availability targets expressed as a percentage and as a maximum tolerable outage duration. Geographic constraints. Operational constraints: who is actually going to operate this system, and what's their technical skill level? Regulatory requirements. Budget reality. Environmental constraints: temperature, humidity, power availability, physical space, vibration.
All of it written down and agreed with the stakeholder before we start designing. Because the number one cause of failed network projects isn't bad technology. It's building to the wrong specification. And the number one cause of wrong specifications is starting with the technology and working backward to justify it, instead of starting with the requirement and selecting the technology that meets it.
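The point about expressing availability both ways can be made concrete: a percentage on its own hides how much outage it actually permits. A minimal sketch of the arithmetic (the targets shown are illustrative, not recommendations):

```python
# Translate an availability target into the downtime budget it implies.
# Targets below are illustrative examples, not recommendations.

MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_budget_minutes(availability_pct: float) -> float:
    """Maximum total downtime per year implied by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for target in (99.0, 99.9, 99.99):
    print(f"{target}% availability -> about {downtime_budget_minutes(target):.0f} minutes/year")
```

"Three nines" sounds close to "two nines" until you see that one permits roughly a tenth of the downtime of the other, which is exactly why the target needs to be agreed as a duration, not just a percentage.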
Vendor-agnostic selection
We evaluate every viable option. Not just the two vendors we've used before, not just the Gartner Magic Quadrant leader, not just the cheapest. We build a comparison matrix against the actual requirements, weight the criteria based on what matters to this specific deployment (not what the vendor's benchmark tested), and make a recommendation we can defend to someone who understands the technology and to someone who doesn't.
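The comparison matrix itself is simple mechanics; the judgment is in the weights. A minimal sketch, with entirely hypothetical criteria, weights, and scores:

```python
# Sketch of a weighted comparison matrix. Criteria, weights, and scores
# are hypothetical examples; the weights are set per deployment, not reused.

criteria = {
    "meets_bandwidth": 0.30,   # sustained and burst, from the requirements doc
    "failover_time":   0.25,
    "operability":     0.25,   # can the client's team actually run it?
    "five_year_cost":  0.20,
}

# Scores 1-5 against each criterion, per candidate (illustrative only).
candidates = {
    "Vendor A": {"meets_bandwidth": 5, "failover_time": 3, "operability": 4, "five_year_cost": 2},
    "Vendor B": {"meets_bandwidth": 4, "failover_time": 4, "operability": 3, "five_year_cost": 4},
}

def weighted_score(scores: dict) -> float:
    """Sum of criterion scores, each multiplied by its agreed weight."""
    return sum(criteria[c] * scores[c] for c in criteria)

for name, scores in sorted(candidates.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```

Note that in this toy example the cheaper-to-run option edges out the one with the best raw throughput. That is the point of weighting against the deployment's actual requirements rather than the vendor's benchmark.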
Documentation that survives departure
Every design gets documented to a standard where someone else could build it from scratch without talking to us. Network topology diagrams. IP addressing schemes with rationale. VLAN maps. Routing tables. Failover procedures with step-by-step instructions. Escalation paths with contact details. Configuration backups. Test results. Commissioning evidence.
If the documentation can't survive the departure of the person who wrote it, it's not documentation. It's notes. We've inherited too many networks where the entire knowledge base was in one person's head, and when that person left, the organization was effectively starting from scratch. We don't do that to our clients.
Test before you trust
We test in a way that simulates real failure conditions, not just the happy path. What happens when the primary link drops? What about the primary and secondary simultaneously? Does the application actually tolerate the failover time? Does the failover work the same way at 2am with nobody watching as it does during a controlled test? Does the system recover gracefully when the primary comes back, or does it need manual intervention?
If we can break it in testing, it'll break in production. Better to find out in a lab than in front of a live audience. We've walked away from vendor equipment that passed every datasheet test but failed under realistic load conditions. The datasheet said it could handle the throughput. It could. For about forty-five minutes. Then the buffer management fell over. Testing found it. Production would have been a disaster.
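The shape of that kind of test is worth showing. A minimal harness sketch: `probe` and `fail_primary` here are hypothetical hooks, not real APIs. In practice the probe might be an HTTP health check over the link and the failure injection a command to an interface or a managed power switch:

```python
# Sketch of a failover test harness. probe() and fail_primary() are
# hypothetical stand-ins: real tests would act on real links.
import time

def measure_failover(probe, fail_primary, timeout_s=120.0, interval_s=0.5):
    """Drop the primary link, then return seconds until the probe succeeds again."""
    fail_primary()
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        if probe():
            return time.monotonic() - start
        time.sleep(interval_s)
    raise RuntimeError("failover never completed within timeout")

# Usage against a simulated environment: the "secondary" carries traffic 2s
# after the primary drops, modelling a convergence delay.
state = {"failed_at": None}
def fake_fail():
    state["failed_at"] = time.monotonic()
def fake_probe():
    return time.monotonic() - state["failed_at"] > 2.0

measured = measure_failover(fake_probe, fake_fail)
print(f"failover took {measured:.1f}s")
```

The measured number is only half the test. The other half is comparing it against what the application actually tolerates, and then running the recovery direction too: does traffic return cleanly when the primary comes back?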
Flexible enough to work. Structured enough to deliver.
Fractional CTO / Virtual IT Director
Typically two to four days per month once past the initial assessment phase. We attend board meetings, manage vendor relationships, direct the IT team's priorities, and own the technology strategy. Most engagements start at the higher end of time commitment and taper as things stabilize. The goal isn't permanent dependency. It's to build the governance capability until you're ready for a full-time hire, or until the fractional model proves to be the right long-term answer.
Network design and deployment
Project-based, with a defined scope and timeline. We design the architecture, document it thoroughly, and either deploy it ourselves or supervise deployment by your team or a third-party integrator. We hand over full documentation and knowledge transfer at commissioning. A well-designed network shouldn't need its architect on speed dial. And if it does, the design wasn't good enough.
Strategic advisory
Shorter engagements for specific decisions: vendor evaluation, M&A technology due diligence, security posture assessment, network audit, carrier contract review. Typically two to eight weeks of focused work resulting in a clear recommendation and supporting analysis. We write it up properly. Not a slide deck with bullet points, but a document that stands on its own and can be shared with stakeholders who weren't in the room when the assessment was done.
What we deliberately don't do
We don't resell hardware or software
We never earn a margin on what we recommend. The moment we start selling equipment, our advice stops being independent. We specify products, write procurement requirements, and help you buy at the right price from the right supplier. But we don't clip the ticket.
We don't do break-fix support
Day-to-day managed IT support is delivered through The Tech Factory, our operational arm, or through a managed service provider of your choice. We design and deploy. They run and support. The separation keeps our advisory work clean.
We don't do open-ended retainers
Every engagement has a defined scope, clear deliverables, and an exit condition. If the work expands, we renegotiate the scope explicitly rather than letting it creep. You'll never get a surprise invoice for work that wasn't agreed.
We don't write reports that sit in a drawer
Every assessment, audit, or strategy document we produce is written to be acted on. If we can't tie a recommendation to a specific business outcome and a realistic implementation path, we leave it out. Nobody needs an eighty-page PDF that makes everyone feel busy but changes nothing.
We don't do body shopping
We don't place contractors and take a cut. If you need permanent headcount, we'll help you write the job specification, screen candidates, and manage the transition. But we won't bill you monthly to occupy a desk and call it consulting.
Start with a conversation
Tell us what you're dealing with. We'll tell you honestly whether we can help.
Get in touch