What gStride Ships Today (and What's Next): 2026 Coverage Matrix

An honest, line-by-line matrix of what gStride ships today, what is in the 30-day window, and what sits in the 90-day window — across the four-layer architecture, the compliance posture, the integrations grid, and the vertical readiness map. Written because the most common reason a mid-market deal stalls in late-stage evaluation is that the buyer cannot tell what a vendor really does today versus what is on the roadmap. We would rather show the gap in writing than discover it under an SLA penalty.

TL;DR — the short answer

In one paragraph

gStride today ships a full four-layer productivity intelligence platform: signal capture across calendar, project tracker, repo, idle detection and timelines; team-level signals including focus mosaic, cycle time, async velocity and meeting drag; manager-facing recommendations on the Monday-move surface; and in-platform actions including calendar focus blocks, rescope, approval and payroll inputs. Compliance posture clears most enterprise procurement gates today — SAML SSO, SCIM, DPA, audit trail, retention policy — with SOC 2 Type II in audit and full EU AI Act high-risk readiness sitting inside the 90-day window. We do not ship keystroke logging, mouse activity, screenshot-on-by-default, webcam, or message-content surveillance, and we will not build them.

  • Today is real. The shipped column maps cell-by-cell to the live feature surface and to the related posts referenced throughout. No vapour.
  • 30-day items are committed. Each one is in active development with a target merge date inside 30 days of publication.
  • 90-day items are explicit hedges. Where we say "evaluating in 90d" we mean the work is scoped but the ship decision is on the next quarterly review. Where we say "shipping in 90d" it is in active development.
  • Anti-features are contractual. The five things we do not build are negative covenants in the master subscription agreement, not just marketing posture.
  • The matrix is the contract before the contract. Roadmap-accuracy SLA with a refund clause sits in section 8.

Why publish a coverage matrix at all

Two of the last five founder-led discovery calls I ran stalled in late-stage evaluation because the buyer could not get a clear answer to one question: what does this product ship today, and what is on the roadmap? Both buyers had been burned before — one by a six-month migration that under-delivered, the other by a twelve-month rollout where the headline feature in the original demo was still in beta at the renewal. Both said, in different words, that they would rather see the gap written down than be sold a promise.

Most SaaS hides the roadmap behind some version of "trust us." The honest move is to publish the matrix on the same URL the procurement team can quote in the master subscription agreement, then update it every sprint and let the dateModified field be the audit trail. This page is the gStride version of that, mirroring the architecture we walk every buyer through in the four-layer AI workforce analytics pillar. If you printed a version older than 14 days, the cells you care about may have moved — request the current version via Cal before procurement closes.
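
The dateModified audit-trail idea maps onto standard schema.org page markup. A minimal sketch of what that looks like — the dates here are placeholders for illustration, not the live page's actual values:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "What gStride Ships Today (and What's Next): 2026 Coverage Matrix",
  "datePublished": "2026-01-05",
  "dateModified": "2026-02-02"
}
```

Because the matrix is re-published to the same URL, diffing dateModified against your printed copy tells you whether the cells you quoted in procurement are still current.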

The 4-layer coverage matrix

The heart of this post. Four rows — capture, signal, recommendation, action — mapped against three columns: shipped today, 30-day window, 90-day window. Anchored to the architecture described in Pillar #3 on AI workforce analytics.

| Layer | Shipped today | 30-day window | 90-day window |
| --- | --- | --- | --- |
| Layer 1 — Capture: reads work signals from tools the team already uses | Calendar integration, project and task tracker (Kanban + list views), tracked time sessions, idle detection with configurable thresholds, timeline auto-generation, screenshot capture (configurable per org/team/user; off by default, optional) | Deeper Slack metadata read for async-velocity signal, Linear and Notion read APIs widened, document-system event capture for cycle time | Custom-API webhook surface for in-house tools; mobile-first capture for field and frontline contexts (evaluating) |
| Layer 2 — Signal: classifies raw inputs into actionable patterns | Focus mosaic on calendar feed, cycle time on ticketing system, async velocity on response latency, meeting drag percentage per role, productivity heatmaps per user, AI productivity scores per task and per day, top-task ranking, overrun detection on estimate-vs-actual | Capacity-vs-demand fit signal (committed work vs 4-week rolling throughput), team-baseline drift indicator | Anomaly scoring on signal drift (evaluating); explainability widget extension to all signals (shipping) |
| Layer 3 — Recommendation: proposes the specific action to address each signal | Manager pulse view ("Monday-move" surface) on team aggregates, overrun rescope suggestions, top-task replication prompts, idle-driven nudge for break or focus | Calendar-intervention recommendation (block focus, decline meeting, recurring pause) generated inline against meeting-drag signal | Auto-rescope proposals on capacity-vs-demand fit alerts (evaluating); workload-rebalance recommendation across IC peers (shipping) |
| Layer 4 — Action: closes the loop in-platform | Block focus on calendar from inside platform, rescope task from overrun alert, manual-time approve/reject by reviewer, payroll input release (final-release action), regularization approval, leave approval flow | Slack-native action ("approve from message"), mobile manager approval for time and leave | One-click compliance export bundle (EU AI Act conformity assessment artefacts, DPA references, audit-trail JSON) (shipping) |

The shipped column is the surface we walk through on every demo. The capture and signal rows lean on the productivity-monitoring and AI-assistance feature surfaces, the recommendation row on the manager-pulse and overrun affordances, and the action row on the timelines-approvals and payroll-payments flows. The deeper architectural reasoning behind why these four layers are the right shape is in Pillar #3; the deeper signal-list breakdown is in AI productivity scoring for remote employees.
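
To make one 30-day cell concrete, here is a hypothetical sketch of the capacity-vs-demand fit signal under a plausible reading of the matrix cell — committed work divided by a 4-week rolling throughput. The function names and story-point units are illustrative assumptions, not gStride's actual implementation:

```python
def rolling_throughput(points_closed_per_week, window=4):
    """Mean units of work closed over the trailing `window` weeks."""
    recent = points_closed_per_week[-window:]
    return sum(recent) / len(recent)

def capacity_fit(committed_points, points_closed_per_week, window=4):
    """Ratio of committed work to trailing throughput.

    A value above 1.0 means the team has committed more work for the
    upcoming period than it has historically closed in a comparable one.
    """
    return committed_points / rolling_throughput(points_closed_per_week, window)
```

For example, a team that closed 30, 34, 28, and 32 points over the last four weeks has a rolling throughput of 31.0; committing 40 points gives a fit ratio of roughly 1.29, which would surface as an over-commitment signal.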

What we do not ship (and will not build)

The interesting half of any honest coverage matrix is the column that does not exist — the things the vendor will not build. Five features are explicit anti-features in the gStride architecture. They are not absent because we have not gotten to them. They are absent because the category we are building does not include them, and bolting them on would damage the product. Framing them as a negative covenant rather than a missing feature is the part most buyers find unusually clarifying.

Anti-feature 1: Keystroke logging

Keystroke counts correlate close to zero with knowledge-work output and over-credit typing-heavy roles while penalising reading-heavy roles. We do not capture them, we do not derive signals from them, and we contractually commit not to introduce them in any version of the platform. The alternative-signal framework is in the real alternative to keystroke tracking.

Anti-feature 2: Mouse-activity tracking

Mouse movement is a proxy for nothing useful at the team level. It rewards mouse-jiggle workarounds and penalises focused reading. gStride does not capture mouse activity as an analytics signal. Idle detection is calendar-aware and app-context-aware, not mouse-aware.
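
What "calendar-aware and app-context-aware, not mouse-aware" means in practice can be sketched as a small classifier. This is a hypothetical illustration under assumed labels and thresholds, not gStride's shipped logic:

```python
from datetime import datetime, timedelta

def classify_idle(last_input: datetime, now: datetime,
                  in_meeting: bool, foreground_app: str,
                  threshold: timedelta = timedelta(minutes=10)) -> str:
    """Classify a low-input gap using calendar and app context.

    Note what is absent: no mouse coordinates, no jiggle detection.
    The labels and app names are illustrative placeholders.
    """
    gap = now - last_input
    if gap < threshold:
        return "active"
    if in_meeting:
        return "in-meeting"     # calendar says busy: a quiet keyboard is not idle
    if foreground_app in {"pdf-reader", "browser-docs"}:
        return "reading"        # low input but plausibly focused work
    return "idle"
```

The point of the sketch is the decision order: calendar context and foreground application are consulted before any gap is labelled idle, which is what makes mouse-jiggle workarounds irrelevant and focused reading safe.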

Anti-feature 3: Screenshot-on-by-default with no granular toggle

Screenshots have a defensible role in narrow, opt-in scenarios — billable-hour client transparency, regulated audit, specific incident investigation. We support those use cases. We do not turn screenshots on by default at the org level and we will not ship a configuration that locks them on without per-role or per-team override. The deeper framework is in productivity monitoring without surveillance.

Anti-feature 4: Webcam capture

Camera-on monitoring is surveillance under any framing and is a hard regulatory liability under the EU AI Act high-risk class for workplace AI. We do not build it.

Anti-feature 5: Message-content surveillance

gStride reads metadata — response latency, decision-loop length — from chat systems where the customer authorises it. We do not read message contents and we do not run sentiment analysis on private team chat. The line between metadata and content is the line between analytics and surveillance, and we hold it.
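
A minimal sketch of what metadata-only analysis looks like: response latency computed from timestamps and author IDs alone, with message bodies never entering the function. The tuple shape is an assumption for illustration, not a real chat-platform API:

```python
from datetime import datetime

def response_latency_seconds(messages):
    """Per-reply latency from (timestamp, author_id) pairs.

    Only author changes count as replies; consecutive messages from the
    same author are treated as one turn. No message content is ever read.
    """
    latencies = []
    for prev, curr in zip(messages, messages[1:]):
        if curr[1] != prev[1]:
            latencies.append((curr[0] - prev[0]).total_seconds())
    return latencies
```

The design point is the input signature: a function that accepts only timestamps and author IDs is structurally incapable of sentiment analysis or content surveillance, which is a stronger guarantee than a policy promise not to look.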

Compliance and procurement readiness

Most mid-market and enterprise procurement processes gate on the same handful of items: SAML SSO, SCIM provisioning, a DPA with Article 28 terms, an audit trail per access, a documented retention policy, and the relevant industry-specific certifications. The grid below is the current state, mapped against the same three-column shape as the architecture matrix.

| Item | Shipped today | 30-day window | 90-day window |
| --- | --- | --- | --- |
| SAML SSO | Yes | | |
| SCIM provisioning | Yes | Enhanced (group sync, automated deprovisioning hardening) | |
| DPA with GDPR Article 28 terms | Yes | | |
| Audit trail per access | Yes | | |
| Documented retention policy | Yes (default 30 days for granular signal; configurable) | | |
| SOC 2 Type II | In audit | | Report ready (shipping) |
| HIPAA BAA | | | Roadmap (evaluating; gated by healthcare-tech deal close) |
| EU AI Act high-risk class compliance | Documented stance, explainable signals, IC inspection rights, audit trail | | Full conformity readiness pre-Aug 2026 enforcement (shipping) |
| India DPDP Act 2023 posture | Documented; consent notice and retention defaults applied | | |
| UK GDPR addendum | Yes (mirrors EU GDPR Article 28 terms) | | |

The compliance row that matters most for August 2026 is the EU AI Act readiness column. The shipped-today posture covers the requirements for explainability, IC inspection rights, human-in-the-loop recommendation, and documented retention. The 90-day commitment is the full conformity-assessment bundle — the artefact set procurement teams in EU jurisdictions are starting to request as standard. The broader regulatory framing is in is employee monitoring legal in 2026.

Integrations matrix

Integrations are where most coverage matrices get quietly evasive — the vendor lists every logo it has ever touched and the buyer has to ask which ones are live, which are in private beta, and which are aspirational. The grid below is honest about the distinction.

| Integration | Shipped today | 30-day window | 90-day window |
| --- | --- | --- | --- |
| Google Calendar | Live | | |
| Microsoft Outlook / Office 365 | Live | | |
| Jira | Live | | |
| GitHub | Live | | |
| GitLab | Live | | |
| Slack (metadata for async velocity) | Live (read-only metadata) | Action surface (approve from message) | |
| Linear | Live (basic read) | Deeper cycle-time signal | |
| Notion | Live (basic read) | Document-event capture for blocker signal | |
| Asana | Live | | |
| ClickUp | Live | | |
| Bitbucket | | | Shipping |
| Microsoft Teams (metadata) | | | Shipping |
| Confluence | | | Evaluating |
| Custom-API webhook surface | | | Shipping |
| Payroll provider exports (Indian PF/ESI/PT/TDS, US generic CSV, UK PAYE-ready CSV) | Live (CSV/PDF export) | | Direct provider sync (evaluating) |

Vertical readiness

A platform that ships well on one vertical and badly on another is a normal mid-stage SaaS reality. The honest move is to publish which verticals the product fits today, which it can fit in 30 days with minor configuration, and which sit in the 90-day evaluation window because the vertical has structural needs we have not yet fully covered.

| Vertical | Shipped today | 30-day window | 90-day window |
| --- | --- | --- | --- |
| IT services (India / global mid-market, 30–200 employees) | Strong fit — primary ICP | | |
| BPO and customer operations | Strong fit — shift, leave, attendance, payroll bundled | | |
| Agency and creative (digital marketing, design studios) | Strong fit — project + payroll + billable-hour reporting | | |
| Engineering-led product teams (10–100 engineers) | Strong fit — focus mosaic, cycle time, PR cadence signals | | |
| Manufacturing and frontline (line operators, plant supervisors) | Configurable fit (kiosk + biometric flow live) | OEE integration | |
| Healthcare-tech and clinical practice | | | Evaluating — HIPAA BAA gated; PHI-aware screenshot logic on roadmap |
| Financial services and regulated SaaS | | | Evaluating — SOC 2 Type II report + EU AI Act conformity bundle gating |

The verticals where today is real are the four ICP rows — IT services, BPO, agency, and engineering. The vertical guides for those segments — including the BPO India, manufacturing, and law firm guides — go deeper into the fit-and-gap analysis per segment. If your vertical is on the 30-day or 90-day row, the right conversation is on Cal, not in a procurement document yet.

What happens when we miss a roadmap commit

A coverage matrix without an accountability mechanism is marketing. The mechanism gStride attaches has three parts, all sitting in the master subscription agreement alongside the standard SLA. First, the roadmap-accuracy SLA: if a 30-day item slips beyond 45 days from publication, or a 90-day item beyond 120 days, the customer who signed on the strength of that commitment can request a pro-rata refund for the affected quarter on the dependent line items. The clause is a forcing function on us to either ship or de-commit early — not punitive.
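
The slip windows and refund trigger described above reduce to a small calculation. This is a sketch of the mechanics as stated in the text only; the `dependent_share` parameter is an illustrative stand-in for however the MSA actually apportions fees to the dependent line items:

```python
def refund_due(window_days: int, actual_days_to_ship: int,
               quarterly_fee: float, dependent_share: float) -> float:
    """Pro-rata refund under the roadmap-accuracy SLA as described:

    a 30-day item slipping past 45 days, or a 90-day item past 120 days,
    triggers a refund on the dependent line items for the affected quarter.
    `dependent_share` is the fraction of the quarterly fee tied to the
    slipped commitment (hypothetical; apportionment is defined in the MSA).
    """
    slip_limit = {30: 45, 90: 120}[window_days]  # only these two windows exist
    if actual_days_to_ship <= slip_limit:
        return 0.0
    return quarterly_fee * dependent_share
```

Note the grace built into the clause: a 30-day item that ships on day 44 costs nothing, which is what makes the mechanism a forcing function to de-commit early rather than a pure penalty.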

Second, the public changelog. Every 30-day cell that ships gets an entry; every cell that is removed in a future matrix version gets a why-note (scope change, deprioritisation, re-scope). The audit trail of what was promised, shipped, and de-committed lives in one place and is queryable by name.

Third, the matrix is referenced in the MSA as the authoritative document for the customer's effective date. Shipped-column cells are warranted capabilities under the standard SLA. 30-day and 90-day cells are roadmap commitments under the refund clause. Anti-features are negative covenants — capabilities we contractually commit not to build during the term, including via acquisition.

A roadmap document the customer can quote in procurement is worth more than a sales-cycle slide deck, even when (especially when) the customer chooses to enforce the refund clause. That is the contract working as designed.

Five-question coverage test for buyers

If you read this far and you are evaluating any productivity intelligence or workforce analytics vendor — including gStride — the five questions below are the ones to put to your current vendor before you renew, and to any new vendor before you sign. They are designed to surface the gap between the demo and the shipped product, which is where most procurement decisions go wrong.

  1. Where is your shipped-vs-roadmap matrix and when was it last reviewed? A vendor that cannot send you the URL by end of day is selling demos, not a product. The dateModified field on the page is the single most useful number you will see in the evaluation.
  2. What anti-features have you contractually committed not to build? A vendor that has not written down the negative covenants is keeping the option to bolt on surveillance later. That option becomes a liability the moment EU AI Act enforcement starts, and it transfers to your procurement file.
  3. What is your SLA on roadmap accuracy? If the answer is "we do our best" or "we have a quarterly review cycle," there is no SLA. A real SLA names the slip window, the refund mechanism, and the document that records both.
  4. Which integrations are live versus which are private beta or aspirational? Ask for the same three-column shape as section 6 above. If the answer mixes them in a single logo grid, the vendor is hiding the gap.
  5. For my specific vertical, where are you on the today / 30-day / 90-day grid? Vertical fit is the single biggest reason late-stage deals stall and the single biggest source of buyer's remorse 90 days post-signature. A vendor that cannot place themselves on the grid for your vertical is asking you to absorb the discovery cost.

The buyer-test summary: if a vendor clears all five, you are looking at a 2026 product. If a vendor clears three, you are looking at a 2024 product with marketing. If a vendor clears fewer, you are looking at a renewal you should not sign. The deeper procurement framework — including the 47-question RFP and the demo-audit pattern — is in the productivity software RFP template and the 7-step comparison framework.

Where this matrix connects to the rest of the gStride buyer surface

This page is one node in a longer chain. Architecture behind the four-layer matrix: Pillar #3 on AI workforce analytics. Enterprise scoring sub-layer for billable timesheet validation: Pillar #5 on enterprise AI timesheet scoring. Anti-surveillance framing for the anti-features section: Pillar #4 on the anti-surveillance productivity stack. ROI math against a 64-employee buyer scenario: ROI calculator. Pricing anchor: pricing page. EU AI Act conformity policy template: gStride playbook.

Frequently asked questions

Why publish a coverage matrix at all?

Because the most common reason mid-market deals stall in late-stage evaluation is that the buyer cannot tell what a vendor ships today versus what is on the roadmap. We would rather show the gap in writing than discover it under an SLA penalty after procurement closes. The matrix is the contract before the contract.

How current is this matrix?

The matrix is reviewed at the close of every sprint and re-published on the same URL. The dateModified field in the page schema is the authoritative source for last review. If you printed a version older than 14 days, request the current one via Cal or pricing enquiry — the gap is usually one or two cells, but the gap matters.

What happens if a roadmap item slips?

Every roadmap commitment in this matrix carries a documented target window — 30 days or 90 days from publication date. If a 30-day item slips beyond 45 days from publication, or a 90-day item slips beyond 120 days, customers who signed on the strength of that commitment can request a pro-rata refund for the affected quarter. The terms sit in the master subscription agreement, not just on this page.

Why are some 90-day items hedged as "evaluating" rather than "shipping"?

Because we will not commit to ship in 90 days what we have not yet scoped past internal design review. "Evaluating" means the item is on the next-quarter review cycle and the team is doing the scoping work to decide whether it ships, ships later, or does not ship. "Shipping" means the work is in active development with a target merge date inside the 90-day window. The distinction is deliberate.

What does gStride not ship and will not build?

Five anti-features are explicitly out of scope and will remain so: keystroke logging, mouse-activity tracking, screenshot-on-by-default with no granular toggle, webcam capture, and message-content surveillance. These are not roadmap omissions — they are architectural commitments. The category we are building is productivity intelligence, not behavioural surveillance, and the anti-features list is part of the EU AI Act high-risk hedge baked into the platform.

Is gStride ready for procurement teams in regulated industries?

Most enterprise procurement gates are cleared today: SAML SSO, SCIM provisioning, DPA with GDPR Article 28 terms, audit trail per access, and documented retention policy. SOC 2 Type II is in audit with the report expected in the 90-day window. HIPAA BAA is on the roadmap and gated by an active healthcare-tech deal closing. The full grid is in the compliance section above.

How does the matrix interact with the master subscription agreement?

The matrix is referenced by name in the MSA as the authoritative roadmap document for the customer's effective date. Cells marked shipped today are treated as warranted capabilities under the standard SLA. Cells in 30-day and 90-day columns are treated as roadmap commitments with the refund clause described above. Anti-features are negative covenants — capabilities we contractually commit not to build during the term.

Walk the matrix with us before procurement closes

If you are deep into evaluating gStride and any cell on the matrix is the gating one for your decision, book a 30-minute walk-through. We will mark the cells you care about with current sprint status, and if any cell has moved since this page was last reviewed we will say so on the call.

Book the matrix walk-through · See pricing