Impact investing has a language problem. Terms like "positive social outcomes," "sustainability-aligned," and "ESG-integrated" are everywhere, but they mean different things to different funds, different investors, and different portfolio companies. Without rigorous, consistent measurement, impact claims become marketing copy rather than accountability tools. At Sway for Future, we believe the future of the asset class depends on our collective willingness to make impact measurement as disciplined — and as demanding — as financial measurement.

This piece is an attempt to open the hood on how we do that. We will walk through why impact measurement matters at the fund level, how we've structured our framework around IRIS+ and theory-of-change principles, which five core metrics anchor every portfolio evaluation, how we communicate results to our limited partners, and where we still struggle to get it right. There is no perfect methodology in impact investing, but there is a meaningful difference between a rigorous attempt and an approximate one. We aspire to the former.

Why Impact Measurement Matters Beyond Compliance

The most common argument for impact measurement is reputational: it protects a fund from accusations of greenwashing. That is a real concern, and rigorous measurement does serve that function. But it is also the least interesting reason to build a serious measurement practice. The more compelling case is that measurement changes behavior — inside portfolio companies, inside the fund itself, and across the ecosystem of capital that flows toward or away from certain kinds of founders and markets.

When a portfolio company knows that Sway for Future will be asking precise questions about carbon avoidance at every board meeting, it changes how that company builds its product roadmap. It creates internal incentives to track data that would otherwise fall through the cracks. When LPs receive a semiannual impact report that presents verified, audited data rather than anecdotal success stories, it changes how they think about deploying capital in subsequent funds. Over time, measurement creates a feedback loop that makes the entire system smarter.

There is also an investment quality argument. Companies that understand their own impact — that can articulate the causal mechanism by which their product improves outcomes — tend to have clearer product thinking, more defensible market positions, and a stronger relationship with the customers and communities they serve. Measurement is often a proxy for organizational clarity. A founding team that cannot explain how they know their intervention works is a founding team that may not fully understand what they are building.

Finally, the asset class as a whole will only attract the scale of institutional capital that the world's sustainability challenges demand if impact investors can demonstrate credible, comparable returns — financial and non-financial. The infrastructure for that comparability is being built right now, and the funds that invest in rigorous measurement today are positioning themselves as the standard-setters of tomorrow.

Building on the IRIS+ Framework

Our framework starts with IRIS+, the catalog of standardized metrics maintained by the Global Impact Investing Network (GIIN). IRIS+ provides a common language for impact measurement across asset classes, geographies, and impact themes. It includes more than 600 metrics organized by strategic goal, and it aligns with the UN Sustainable Development Goals, making it legible to a wide range of institutional LPs who are themselves navigating SDG-reporting obligations.

We do not use all of those metrics. No fund should. The value of IRIS+ lies not in comprehensiveness but in standardization: when Sway for Future reports that a portfolio company improved access to health services for 47,000 people in underserved communities, we are using the IRIS+ definition of "underserved," the IRIS+ methodology for counting unique beneficiaries, and the IRIS+ distinction between people reached and people whose outcomes measurably improved. That precision is what makes our data comparable to data produced by other IRIS+-aligned funds.

We typically select between 8 and 12 IRIS+ metrics per portfolio company at the time of investment, chosen to reflect that company's specific theory of change. A climate-tech company deploying grid-scale battery storage will have a different metric set than a health-tech company expanding diagnostic access in rural communities. The selection process happens during due diligence, is formalized in the term sheet as a reporting obligation, and is revisited annually as the company's business model evolves.

The Limits of Standard Metrics

We are honest with ourselves and our LPs about what IRIS+ cannot do. Standardized metrics are designed for comparability, not for capturing the full depth of a company's impact. They tend to measure outputs (services delivered, people reached, products sold) rather than outcomes (whether those services actually changed anything in people's lives) and almost never touch attribution (whether the change would have happened anyway without the company's intervention).

Bridging from outputs to outcomes and attribution requires additional qualitative work — customer surveys, third-party evaluations, longitudinal tracking of beneficiary cohorts — that is expensive, slow, and difficult to standardize. We invest in this deeper work selectively, for portfolio companies whose scale of potential impact justifies the cost and whose business models are stable enough to support multi-year outcome tracking. For earlier-stage companies, we rely more heavily on output metrics with explicit acknowledgment of their limitations.

Theory of Change as an Investment Tool

Every company in the Sway for Future portfolio articulates an explicit theory of change before we close an investment. A theory of change is a logical map of how a company's activities lead, step by step, to the social or environmental outcome it is trying to achieve. It answers the question: why do we believe this will work?

A well-constructed theory of change identifies inputs (capital, talent, technology), activities (what the company actually does), outputs (the immediate results of those activities), outcomes (the medium-term changes in people's lives or in the environment), and impact (the long-term systemic change the company is contributing to). It also identifies the assumptions embedded in each causal link — the conditions that must be true for one step to lead to the next.
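The five components and the assumptions behind each causal link can be represented as a small data structure. This is an illustrative sketch of how we organize the exercise internally, not a template we impose on founders; the example values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class TheoryOfChange:
    """One causal chain from inputs to impact, with the assumption
    behind each link recorded explicitly (illustrative structure)."""
    inputs: list       # capital, talent, technology
    activities: list   # what the company actually does
    outputs: list      # immediate, countable results
    outcomes: list     # medium-term changes in lives or environment
    impact: str        # long-term systemic change contributed to
    assumptions: dict  # causal link -> condition that must hold

# Hypothetical example for a grid-scale storage company
toc = TheoryOfChange(
    inputs=["growth capital", "battery engineering team"],
    activities=["deploy grid-connected storage"],
    outputs=["MWh of storage capacity online"],
    outcomes=["lower grid-level emissions intensity"],
    impact="decarbonized electricity system",
    assumptions={
        "outputs->outcomes": "stored energy displaces fossil peaker plants",
        "outcomes->impact": "displacement persists as the grid mix evolves",
    },
)
```

Writing the assumptions down as first-class data is what makes the annual revision exercise concrete: when an assumption fails, there is a specific link to revise rather than a narrative to rewrite.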

We find that the theory-of-change exercise is most valuable not as a finished document but as a conversation. When a founding team walks us through their logic, we probe the assumptions. If a company's theory of change assumes that users will change their behavior after using the product, we ask what evidence they have that this is happening. If it assumes that scale will reduce cost enough to reach lower-income populations, we ask what the unit economics look like at 10 times current volume. These questions are not adversarial — they are an attempt to stress-test the causal chain before we commit capital.

Post-investment, the theory of change becomes a living document that we revisit annually with the founding team. When assumptions prove wrong — when the pathway to impact does not unfold as anticipated — we work with founders to revise the theory and update the associated metrics. This is not a sign of failure; it is a sign that measurement is actually influencing decision-making.

The Five Core Metrics We Track Across Every Portfolio

While each portfolio company has its own bespoke metric set, we track five portfolio-wide metrics that appear in every company's reporting obligations to some degree. These five were chosen because they reflect the thematic priorities of our fund, because they are measurable with reasonable rigor at the seed and Series A stages, and because they align with the areas where we believe technology-driven solutions have the greatest potential for catalytic impact over the next decade.

1. Carbon Avoidance

Carbon avoidance measures the reduction in greenhouse gas emissions that occurs because of a company's product or service, relative to the baseline scenario in which the product or service did not exist. It is distinct from a company's own operational emissions (which we also track but which are typically small for early-stage technology companies) and captures the external climate benefit delivered to the world.

We measure carbon avoidance in metric tons of CO2-equivalent per year, using the IRIS+ metric PI9929 as our anchor. For companies where direct measurement is feasible — a grid-connected energy storage system, for example, produces auditable displacement data — we require verified figures. For companies where direct measurement is difficult, we use conservative modeling methodologies aligned with the Science Based Targets initiative's guidance on value chain emissions.

We report this metric both at the individual company level and aggregated across the portfolio. As of mid-2025, our active portfolio companies collectively avoided an estimated 2.3 million metric tons of CO2-equivalent annually, a figure we expect to grow significantly as our climate-tech holdings scale toward commercialization.
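The portfolio-level aggregation can be sketched as a running total that keeps verified and modeled figures separate, which is how we preserve the distinction between auditable displacement data and conservative modeling. The company names and the split below are hypothetical, chosen only so that the total matches the 2.3 million tCO2e figure above:

```python
from dataclasses import dataclass

@dataclass
class CarbonAvoidance:
    """Annual avoided emissions for one portfolio company (illustrative)."""
    company: str
    tco2e_per_year: float  # metric tons CO2-equivalent avoided annually
    verified: bool         # True if backed by auditable displacement data

def portfolio_avoidance(records):
    """Aggregate avoided emissions, reporting verified and modeled
    totals separately rather than blending them."""
    verified = sum(r.tco2e_per_year for r in records if r.verified)
    modeled = sum(r.tco2e_per_year for r in records if not r.verified)
    return {
        "verified_tco2e": verified,
        "modeled_tco2e": modeled,
        "total_tco2e": verified + modeled,
    }

# Hypothetical two-company portfolio summing to 2.3M tCO2e
portfolio = [
    CarbonAvoidance("grid-storage-co", 1_400_000, verified=True),
    CarbonAvoidance("industrial-heat-co", 900_000, verified=False),
]
```

Keeping the verified/modeled split intact through aggregation means the methodology note to LPs can state exactly what share of the headline figure rests on auditable data.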

2. Lives Improved

Lives improved is our broadest outcome metric. It counts the number of people who have experienced a meaningful positive change in their quality of life as a direct result of engaging with a portfolio company's product or service. The IRIS+ definition we use requires that the improvement be evidence-backed — not simply that someone used the product, but that using the product produced a documented change in a measurable life dimension: health status, income level, educational attainment, safety, or access to essential services.

This metric requires the most careful methodological work of the five. "Meaningful" is not self-defining. Our standards require that companies demonstrate impact through at least one of the following: peer-reviewed academic research on their intervention type, randomized controlled trial data from their own operations, a validated pre/post survey administered to a representative sample of users, or third-party program evaluation by a qualified independent organization. Absent this evidence, we record the metric as "output-level only" rather than counting it as a confirmed outcome.
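The evidence gate described above can be sketched as a simple classifier: a figure only counts as a confirmed outcome when at least one accepted evidence type is present, and otherwise is recorded as output-level only. The evidence-type labels are our own illustrative shorthand, not IRIS+ identifiers:

```python
# Illustrative shorthand for the four accepted evidence standards
ACCEPTED_EVIDENCE = {
    "peer_reviewed_research",     # published research on the intervention type
    "rct_own_operations",         # RCT data from the company's own operations
    "validated_pre_post_survey",  # representative pre/post user survey
    "third_party_evaluation",     # qualified independent program evaluation
}

def classify_lives_metric(evidence_types, people_reached):
    """Count people as a confirmed outcome only when at least one
    accepted evidence type is present; otherwise record the same
    figure as output-level only."""
    if ACCEPTED_EVIDENCE & set(evidence_types):
        return {"status": "confirmed_outcome", "count": people_reached}
    return {"status": "output_level_only", "count": people_reached}
```

The point of the two-status design is that weak evidence never silently inflates the outcome total; the figure survives, but labeled for what it is.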

Across our current portfolio, we count approximately 1.1 million people whose lives have measurably improved as a result of engaging with Sway for Future-backed companies, with these evidence standards applied consistently.

3. Access Gaps Closed

Access gaps closed measures the reduction in disparity between underserved populations and the general population in access to specific services or resources. This metric captures something that lives-improved does not: whether a company is reaching the people who need it most, or whether its benefits are flowing primarily to populations that already had reasonable access.

We define underserved populations using a combination of income criteria (households below 200% of the federal poverty line in the US context), geographic criteria (HRSA-designated Health Professional Shortage Areas, USDA-designated food deserts, and similar federal designations), and demographic criteria (populations historically excluded from financial services, healthcare, or education on the basis of race, ethnicity, disability status, or immigration status).

For each portfolio company serving retail or direct-to-consumer markets, we track the share of active users or customers who fall into one or more of these categories. A meaningful access-gap closure requires at least 40% of a company's active users to be from underserved populations, and requires demonstrating that the unit economics of serving those users are at parity with or better than the overall customer base — meaning the company is not cross-subsidizing impact with revenue from higher-income users in a way that would be unsustainable at scale.
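The two conditions above, a minimum 40% underserved share of active users and unit-economics parity for those users, can be expressed as a single check. This is a sketch; the margin inputs are stand-ins for whatever unit-economics measure a given company reports:

```python
def access_gap_closed(underserved_users, total_users,
                      underserved_unit_margin, overall_unit_margin,
                      min_share=0.40):
    """Return True only if both conditions hold: underserved users make
    up at least min_share of active users, and the unit economics of
    serving them are at parity with or better than the overall base."""
    share = underserved_users / total_users
    return share >= min_share and underserved_unit_margin >= overall_unit_margin
```

Requiring both conditions jointly is what rules out the cross-subsidy failure mode: a company can hit the 40% share while quietly losing money on every underserved user, and this check would still flag it.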

4. Jobs Created in Underinvested Communities

Job creation in underinvested communities is a metric that acknowledges the role of economic inclusion in systemic impact. A company that creates high-quality employment opportunities in communities that have been structurally excluded from the knowledge economy contributes to long-term wealth building in a way that no product-level benefit can fully substitute for.

We track this metric at two levels. At the direct level, we count full-time equivalent jobs created by portfolio companies in communities with median household incomes below 80% of the area median income. At the indirect level, for companies whose products enable other small businesses or independent workers to earn income — platform companies, marketplace companies, infrastructure-as-a-service companies — we count the income-earning participants in those ecosystems who meet our underinvested-community criteria.

Job quality matters as much as job count. We do not count gig-economy engagements without benefits or income stability as equivalent to full-time employment. Our methodology applies a quality-adjusted job metric that weights positions by healthcare coverage, median wages relative to living wage benchmarks, and stability indicators such as average tenure.
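A quality adjustment along these lines can be sketched as a per-position weight built from the three factors named above. The specific weights and caps below are illustrative assumptions, not our production methodology:

```python
def quality_adjusted_jobs(jobs):
    """Sum FTE positions weighted by healthcare coverage, wage relative
    to a living-wage benchmark, and tenure-based stability.
    Weight values here are illustrative, not our actual calibration."""
    total = 0.0
    for job in jobs:
        weight = 1.0
        if not job["has_healthcare"]:
            weight *= 0.7                                  # penalty: no benefits
        weight *= min(job["wage"] / job["living_wage"], 1.25)  # capped wage bonus
        weight *= min(job["avg_tenure_years"] / 2.0, 1.0)      # full credit at 2+ years
        total += weight
    return total
```

Under this scheme a benchmark job, with healthcare, wages at the living wage, and two or more years of average tenure, counts as exactly 1.0, and unstable or uncovered positions count as fractions of a job rather than being excluded outright.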

5. Resource Efficiency Improvement

Resource efficiency improvement captures gains in the ratio of productive output to resource input across our portfolio. For climate-tech and cleantech companies, this typically means energy, water, or material efficiency. For digital infrastructure companies, it may mean compute efficiency or data center power usage effectiveness. For agricultural technology companies, it may mean reduction in fertilizer, pesticide, or water inputs per unit of crop yield.

We express this metric as a percentage improvement in resource efficiency relative to the incumbent solution the company's product displaces or improves upon. A company that enables industrial facilities to reduce energy consumption by 30% per unit of output for the same operational performance receives a resource efficiency improvement score of 30%. We aggregate this metric across portfolio companies using a revenue-weighted average, giving more influence to companies that have reached commercial scale and therefore demonstrate real-world efficiency rather than laboratory benchmarks.
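The revenue-weighted aggregation works out as follows; the revenue figures in the example are hypothetical:

```python
def revenue_weighted_efficiency(companies):
    """Revenue-weighted average of per-company efficiency improvements,
    expressed as fractions (0.30 = 30% improvement over the incumbent)."""
    total_revenue = sum(c["revenue"] for c in companies)
    return sum(c["efficiency_gain"] * c["revenue"] for c in companies) / total_revenue

# Hypothetical: a commercial-scale company dominates the weighting
companies = [
    {"revenue": 8_000_000, "efficiency_gain": 0.30},
    {"revenue": 2_000_000, "efficiency_gain": 0.10},
]
```

With these inputs the portfolio figure lands at 26%, much closer to the larger company's 30% than a simple average would be, which is exactly the intent: commercial-scale deployments carry more evidential weight than laboratory benchmarks.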

Reporting to Limited Partners

Our LP impact reporting runs on a semiannual cycle, aligned with our financial reporting cadence. Each impact report includes three sections: a portfolio-level summary, company-level scorecards, and a methodology note.

The portfolio-level summary presents aggregate performance on each of the five core metrics, year-over-year comparisons, and a narrative discussion of the factors driving changes in performance. We are deliberate about distinguishing between portfolio growth (more companies, more users) and impact intensity (deeper impact per unit of activity). An impact portfolio that grows primarily by adding more companies without improving intensity is not necessarily becoming more effective.

Company-level scorecards present each portfolio company's performance against its specific metric set, with traffic-light ratings (on track, monitoring required, off track) for each metric. We include a brief narrative from the company's leadership on any metrics that are off track, including the root cause of the underperformance and the corrective actions being taken. This creates accountability without punishing honesty — we want portfolio companies to surface impact challenges early, not hide them until they become crises.

The methodology note is the section most LPs skip but that we consider essential. It documents any changes to our measurement methodology since the prior reporting period, any metrics that shifted from estimated to verified status, any third-party audits or evaluations completed during the period, and any known limitations of the data we are presenting. Impact data is rarely perfect, and we believe the appropriate response to imperfection is documentation and disclosure, not false confidence.

Independent Verification

Beginning with our 2025 annual report, we have engaged a third-party verification firm to conduct spot audits of data submitted by portfolio companies. The verification process involves reviewing source documentation for claimed metrics, interviewing the staff responsible for data collection, and, where feasible, independently replicating calculations from primary data. We expect to verify approximately 30% of portfolio metrics each year on a rotating basis, so that every metric in the portfolio is independently verified at least once every three years.
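The rotation can be sketched as a round-robin assignment of metrics to years of a three-year cycle, which guarantees that every metric is audited at least once per cycle. This is a simplified model of the schedule described above, not the actual selection process:

```python
def verification_schedule(metrics, cycle_years=3):
    """Assign each metric to one year of the cycle round-robin, so
    roughly a third are spot-audited per year and every metric is
    covered at least once per cycle."""
    schedule = {year: [] for year in range(cycle_years)}
    for i, metric in enumerate(metrics):
        schedule[i % cycle_years].append(metric)
    return schedule
```

In practice the selection would also consider materiality and data-quality risk rather than pure rotation, but the round-robin floor is what makes the coverage guarantee provable.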

This commitment to third-party verification is not universal in the impact investing space, and it adds cost. We justify that cost on the grounds that it protects our LPs from relying on unverified data, creates stronger incentives for portfolio companies to invest in their own data quality, and positions Sway for Future as a credible contributor to the growing body of evidence on what works in impact technology.

Challenges in Impact Measurement

Intellectual honesty requires us to be candid about where our measurement framework struggles. There are at least four persistent challenges that we have not yet fully resolved, and that we suspect no fund has fully resolved.

Attribution and Additionality

The most fundamental challenge in impact measurement is attribution: can we demonstrate that the outcomes we are counting would not have happened anyway, without the company's intervention or without Sway for Future's capital? This is the counterfactual problem, and it is essentially impossible to resolve with certainty at the company level. Markets change, populations improve or decline for reasons having nothing to do with any single intervention, and the baseline against which we measure change is itself constantly shifting.

We address attribution through two partial solutions. First, we require that theories of change identify the specific mechanism by which a company's product produces its outcomes, so that we are at least measuring the right things. Second, where peer-reviewed evidence exists on the efficacy of a given intervention type — telemedicine for chronic disease management, for example — we rely on that body of evidence to support the causal claim rather than requiring each portfolio company to replicate a randomized controlled trial independently.

Long Time Horizons

Many of the most important outcomes we care about — multi-generational wealth building, long-term health improvements, systemic shifts in how societies manage resources — operate on timescales of decades, not the 10-year fund lifecycle. This creates a structural mismatch between what we can measure and what we ultimately care about.

Our response is to invest in leading indicators: metrics that research suggests are predictive of long-term outcomes even if the outcomes themselves are not yet observable. Improved access to primary care is a leading indicator for reduced emergency department utilization and better chronic disease outcomes. Early financial inclusion — having a bank account, building a credit history — is a leading indicator for wealth accumulation. We are honest that we are measuring proxies, not ultimate outcomes, and we communicate that clearly to LPs.

Data Quality at Early Stage

Seed-stage companies often lack the operational infrastructure to collect reliable impact data. Founders are focused on product-market fit, not measurement systems. Requiring rigorous data collection from day one risks burdening early-stage teams with overhead that diverts energy from core building activities. But accepting low-quality data means our early-stage impact claims are largely unverified.

We navigate this tension by tiering our reporting requirements by stage. Seed-stage companies are asked for output-level metrics only, using whatever data they already collect as a natural byproduct of their operations. As companies scale toward Series A and beyond, we introduce outcome-level requirements and begin investing in their measurement infrastructure alongside our portfolio support activities. By the time a company is approaching growth equity, we expect it to have a fully operational impact measurement system that can support external verification.

Comparability Across Impact Themes

Our portfolio spans climate technology, health equity, financial inclusion, and education technology. These sectors measure impact in fundamentally different units — tons of carbon, quality-adjusted life years, dollars of wealth created, percentage points of literacy improvement. Aggregating across them into a single "impact score" requires value judgments about how to weight one type of outcome against another, and those judgments are inevitably subjective.

We have deliberately resisted the temptation to create a single portfolio-level impact score, because we believe such scores create an illusion of precision that the underlying data cannot support. Instead, we report each impact theme separately and allow LPs to apply their own weighting based on their institutional priorities. Some of our LPs care most about climate impact; others prioritize health equity. Our job is to provide credible, disaggregated data — not to pre-aggregate it in a way that obscures important distinctions.

Where We Are Going

Impact measurement is a practice, not a destination. The field is evolving rapidly, and we are evolving with it. Over the next two years, we plan to deepen our commitment to third-party verification, expand our outcome-level measurement for mid-stage portfolio companies, and invest in the longitudinal data infrastructure that will allow us to track beneficiary outcomes beyond the period of direct product engagement.

We are also closely watching the development of mandatory ESG disclosure frameworks in the US, EU, and UK. The SEC's climate disclosure rules, the EU's Corporate Sustainability Reporting Directive, and the IFRS Sustainability Disclosure Standards are all moving toward requiring more consistent, auditable non-financial reporting. As these frameworks mature, we expect to align our portfolio measurement systems with them to reduce reporting friction for our companies and to position our data within the broader institutional conversation about standardized impact disclosure.

Ultimately, we measure impact because we believe it makes us better investors. It keeps us honest about whether our capital is doing what we say it is doing. It challenges our portfolio companies to understand their own social and environmental function at a level of precision that most businesses never achieve. And it contributes — however incrementally — to the growing body of evidence that rigorous impact investing and strong financial returns are not in tension, but are, for the kinds of companies we back, deeply complementary.

We welcome scrutiny, feedback, and continued dialogue with the wider impact investing community on how our framework can improve. If you are an LP, a founder, or a peer fund with thoughts on measurement practices, we want to hear from you.
