As lawmakers and tech platforms debate how to regulate teens on social media, many families are overlooking a critical reality: digital habits start much earlier. By elementary school, kids are already engaging in group chats, online games, streaming platforms, and messaging tools. Between ages 8 and 12, they start forming the communication patterns and device habits that often carry into adolescence. By the time they join traditional social media platforms, many of these behaviors are already established. Kate Doerksen, Cofounder and CEO of Sage Haven, explores why supporting healthy digital development during these early years can help shape more positive online experiences later on.
1. Most policy debates and lawsuits focus on teens and social media platforms, but kids’ digital lives start years earlier. What’s happening in that gap, and why does it matter right now?
There’s a critical window between ages 8 and 12 when most U.S. kids move online in a more significant way—through online video games, video streaming, and messaging. This starts even before they have smartphones, via tablets, smartwatches, video game consoles, and computers.
While legislators and attorneys general are focused on protecting teens from Instagram, Snapchat, and TikTok, it’s actually in elementary and early middle school that many kids establish their foundational digital behavior—something that flies under the radar in most policy discussions. Elementary and middle schoolers are already navigating complex social dynamics in group chats. They’re experiencing exclusion, learning to handle conflict, and forming their first understanding of what’s acceptable online, without the guardrails or attention that older teens’ social media use receives. They also message primarily on apps designed for adults, like iMessage, Google Messages, and Discord, where they get over-notified and develop the habit of incessantly checking their devices.
This gap matters urgently because these aren’t just “practice” years. The communication patterns kids develop at 9 or 10 become the foundation for how they’ll behave at 14 or 16. If a child’s first digital social experiences involve unchecked gossip, exclusion, normalized meanness in group chats, or digital addiction, those patterns solidify. By the time they reach the platforms everyone’s worried about, the damage is already done. We’re essentially waiting until kids have spent years reinforcing unhealthy digital behaviors before we intervene. The smarter approach is shaping those behaviors from the start, in the environments where kids are actually communicating today: messaging apps, video games, YouTube, and more.
2. Before TikTok, Instagram, or Snapchat, many kids are already deeply embedded in group chats on watches, tablets, and starter phones. What kinds of behaviors and challenges are you seeing emerge in these early digital spaces?
Many parents begin chatting online with their own kids between first and third grade. Most kids begin chatting with peers between second and fourth grade. Small chats typically start among a few close friends who want to connect and shift to larger group chats between third and fifth grade. These group chats can be quite large, sometimes with an entire class of kids in a single thread.
The challenges in these early messaging spaces mirror what happens on teen social platforms, but kids this age lack the emotional maturity and social experience to navigate them effectively. We see group chat dynamics that would be familiar to any middle schooler: kids being intentionally excluded from chats, gossip spreading rapidly through friend groups, conflicts escalating because tone is misread in text, and the pressure to respond instantly, even during homework or family time.
What’s particularly concerning is how early kids are encountering manipulation tactics. They’re experiencing things like being added to and removed from group chats as a form of social control, screenshots being shared out of context, or being pressured to take sides in conflicts between friends. These aren’t just minor squabbles—they’re formative experiences that teach kids how to treat others online and what treatment to accept themselves.
The other major challenge is the total absence of adult awareness. Parents often don’t realize the sophistication of social dynamics happening in what they assume are just “cute kid chats.” Teachers and schools have limited visibility into messaging that happens on personal devices. So kids are essentially socializing in an environment where the adults who would normally help them navigate tricky social situations simply aren’t present. They’re learning trial by fire, often making mistakes that damage friendships or reinforce toxic patterns that will follow them for years.
Like most life skills our children need to learn, online chatting must be taught! When left unsupervised, online chatting (especially in large groups) is frequently the origin of unhealthy technology habits like checking a device every few minutes as well as exposure to inappropriate content, online bullying, and social comparison.
Similar to teaching our kids to drive a car, we must teach them how to chat online. They aren’t ready for a cross-country road trip with friends the day they get their driver’s license, and the same logic applies here. We need to ease kids into online chatting with supervision and coaching until we’re confident they are prepared for unsupervised messaging.
3. Why are the early years of kids’ digital communication often overlooked, and how do those first online interactions shape long-term habits and expectations?
These years are overlooked for a few interconnected reasons. First, there’s a perception gap: parents and policymakers tend to think of “real” digital risk as beginning with social media, so messaging between 9-year-olds feels low-stakes by comparison. Second, these early digital interactions are fragmented across watches, games, and messaging apps rather than happening on a single visible platform, making them harder to monitor or regulate. Third, we’ve collectively underestimated how sophisticated kids’ social lives are at this age and how much emotional weight their peer relationships carry.
But here’s what we’re missing: these first online interactions are when kids internalize the rules of digital communication. A 10-year-old who learns that being mean in group chats gets social status doesn’t suddenly develop empathy when they turn 13 and join Instagram. A child who experiences being excluded from chats (or participates in excluding others) is learning that this behavior is normal. These early years establish a child’s baseline expectations for what online communication looks like: Is it kind or cruel? Inclusive or exclusionary? Do adults care about what happens here, or is this a space without accountability?
The habits formed in these years are remarkably sticky. Research on childhood development shows that social behaviors established in late elementary and early middle school are strong predictors of later patterns. If a child’s first several years of digital interaction normalize toxic behavior, we’re dealing with learned patterns that become harder to reshape as kids get older. By the time they’re teenagers with more autonomy and more sophisticated platforms, those early patterns are deeply ingrained.
4. AI is often viewed with caution when it comes to children. How can thoughtfully designed AI play a role in guiding healthier communication and addressing issues before they escalate?
The caution around AI and children is warranted! We absolutely need to be thoughtful about how these systems are built and deployed, and whether our LLMs are safe-by-design with the appropriate levels of transparency.
There also need to be thoughtful parental controls and guardrails. I’m a big advocate that unsafe or mature answers and content should not be shown unless the user is logged in and age-verified as an adult. You shouldn’t be able to access unsafe or mature content simply by logging out to avoid parental controls, which is how OpenAI’s (easily circumvented) parental controls work today.
It’s also true that AI can be a powerful tool for keeping kids safe online, with improved moderation in group chats and sophisticated tools to block unsafe content and sites. For example, with Sage Haven, our safe messaging and voice calling app for kids, we use AI to block harmful messages before they are even sent and to coach kids on messaging well with AI-powered dynamic nudges toward kindness. Before a mean message goes out, we can pop an alert that says, “This looks like it could hurt their feelings. What if you say this instead?” with recommended language. AI can also teach kids messaging best practices and etiquette. When kids are talking one-on-one in front of a big group chat, we can send an alert and prompt them to move their conversation to a private thread. Over time, kids internalize these prompts and develop digital literacy skills.
In our interviews with parents, we realized they also need more support helping their kids slowly and safely on-ramp to technology. We learned that nearly three-quarters of parents already have a social contract with their 8- to 12-year-old kids to supervise and spot-check their messages. Today, they most commonly just grab their kids’ devices and scroll backwards. We realized we could use AI to make this easier and more efficient for parents. With Sage Haven, parents can approve every contact, which eliminates spam and helps slow down the on-ramp of messaging, and supervise messages from their own phone with AI alerts that flag concerning patterns or specific messages. This allows them to coach their kid through tricky interpersonal dynamics and prevent issues before they escalate.
As kids become teens and are ready for more privacy, we scale back full supervision at age-appropriate intervals while still giving parents the safety net of alerts for dangerous or serious issues.
5. Many parents feel overwhelmed navigating technology choices on their own. How does Sage Haven help create shared expectations and healthier digital norms for kids, rather than leaving families to figure it out individually?
Navigating technology access for kids is the #1 pain point of modern parenting. The current landscape puts enormous pressure on individual parents to become technology experts and coordinate norms with other families on their own. One family tries to delay smartphones until high school, while their child’s friends all have them by fifth grade. Another family allows unlimited messaging, creating pressure on families with stricter boundaries. Parents are making isolated decisions with incomplete information, often feeling like they’re either being overprotective or negligent. It’s an impossible position.
So for starters, we saw a gap in the information available to parents (which focused more on teens and social media), so we published a free guidebook for parents of kids between the ages of 5 and 12, complete with actionable, step-by-step guides and how-to videos. It helps overwhelmed parents set up parental controls and think through what boundaries are right for their family. You can check out our free guidebook here.
We also advocate for regulatory changes to better protect kids online. This honestly shouldn’t be solely on the shoulders of parents. There should be tighter guardrails and accountability for technology developers, and we are working on moving these laws and conversations with regulators forward.
Lastly, we build safer technology alternatives and have a big vision for a safer internet for our kids. Our first product is a safe messaging and voice calling app for kids called Sage Haven. It’s interoperable with iMessage, Google Messages, and other messaging apps, so if your kid uses the Sage Haven app, you get all of the benefits regardless of what other families choose. It lets parents approve every friend and family member their kid communicates with, slowing down the on-ramp of online communication. It empowers parents to supervise from their own phone, with AI alerts and recaps for ease. And the entire experience is designed to be non-addictive, so kids can message safely with real friends without learning to check their devices constantly.
6. If we got the “before social media” years right, how do you think that would change kids’ long-term relationships with technology, and what would success look like a decade from now?
If we got these foundational years right, we’d raise a generation of kids who see technology as a tool for genuine connection rather than a space for performing, competing, or tearing others down. They’d enter their teen years with established habits of thoughtful communication, having learned early that their words matter and that there are real people on the other side of every screen. They’d have practice navigating conflict constructively, experience with setting boundaries around their digital time, and an internal compass for what healthy online interaction looks like.
A decade from now, success would look like drastically reduced rates of cyberbullying, because kids learned early that exclusion and cruelty aren’t acceptable, whether they happen in person or online. It would look like young adults who can be present in their physical lives without constant digital distraction, because they developed a healthy relationship with technology from the start, rather than having to break addictive patterns later.
More broadly, success would mean moving from our current reactive approach, where we wait for teenagers to experience serious mental health consequences before intervening, to a proactive approach where we shape digital environments to support healthy development from the beginning. We’d stop accepting that childhood and adolescence must involve widespread digital harm as an inevitable rite of passage.
Instead, we’d look back and wonder why we ever thought it was acceptable to let children’s first social experiences online happen in completely unmoderated and addictive spaces. Getting these early years right isn’t about protecting kids from technology, it’s about ensuring technology supports the childhood they deserve.