Editor’s note: This editorial is published under Kevin’s name but it would not have been the same without the contributions by other DRN editors, particularly Lis Petersen, Brendan Hawke and Peter Bower. They have all lent their thoughts and areas of expertise to keep me on the straight and narrow.

 

“If you’re not paying for something, you’re not the customer; you’re the product being sold.” – Andrew Lewis, 26 August 2010.

The digital world makes it simple and natural to connect with friends and family to share information, photos and more. Sharing special moments online, so that family and friends far and wide can feel connected and involved, is now normalised. Milestones such as your baby’s first steps or first smiles, engagements and weddings can be found across the various social media platforms.

How many of us have a Facebook or Instagram account? Unless you have been living under a rock, you will know that both are owned by the same company, Meta, along with the very popular WhatsApp. But how many of us have read the entire terms and conditions of use?

Back in 2020, Visual Capitalist posted an article on how long it would take to read the terms of service (TOS) agreements of fourteen popular online services. Assuming a reading speed of 240 words per minute, Facebook’s agreement alone would take 17 minutes and 12 seconds to read (as at 18 April 2020), and most of it is convoluted legalese.
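The arithmetic behind such estimates is simple: word count divided by reading speed. A minimal sketch, where the 4,128-word figure is an illustrative assumption (it is simply what 17 minutes 12 seconds at 240 words per minute works back to, not a measured count):

```python
def reading_time(word_count: int, words_per_minute: int = 240) -> str:
    """Estimate reading time for a document at a given reading speed."""
    total_seconds = round(word_count / words_per_minute * 60)
    minutes, seconds = divmod(total_seconds, 60)
    return f"{minutes} min {seconds} s"

# 4,128 words at 240 wpm works out to 17 min 12 s
print(reading_time(4128))
```

Multiply that across fourteen services, each with its own policy updates over time, and it is easy to see why almost nobody reads them.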

Privacy, genAI, you and social media

 

Ever-evolving TOS and their implications

Facebook’s TOS has rarely been out of the limelight over the years. An update in 2009 provoked outrage when changed permissions allowed Facebook the following:

  • Retention of all rights to your content after you remove it from Facebook, and even after you delete your Facebook account.
  • If you put the Facebook share widget anywhere, for example on your own blog, Facebook claimed it could do whatever it wanted with your content.
  • The jury was out on whether you still owned your own content, given Facebook was silent on the matter.

Fast forward to 2024. Meta’s new AI Terms of Service permit it to use your social activity on its platforms to train its AI tools. “If the product is free, then you are the product.”

Europe, with its strong privacy protections in the form of the General Data Protection Regulation (GDPR), is the only jurisdiction where you can opt out. There is no equivalent protection for Australians.

Now pause for a moment and consider the implications for ourselves, and in particular for our children, who had no say in their parents posting their photos. Not one to miss an opportunity, Meta has also made the TOS retrospective; data as far back as 2007 can be used for its AI boot camp.

With some of these children now coming of age and entering the workforce, where savvy employers will look up the social media footprints of prospective hires, the potential for misuse of personal data is extremely high.

 

What is genAI?

Generative AI (genAI) refers to any artificial intelligence model that generates new content, which can include both images and text, as well as other forms of media such as audio and video. Thanks, ChatGPT, for that explanation.

At this stage it is important to clarify what the latest AI craze actually is, even though there is no simple way to explain it. So, to put the fearmongering aside: Skynet is not coming for us, and the internet is not becoming sentient. Machines are not unilaterally making decisions that will affect the future of mankind.

It is important to understand that genAI is a different beast to Traditional AI (TA), which has been around for quite a while. TA mimics the cognitive functions associated with the human mind, such as learning and problem-solving. It relies on human-generated rules to perform specific tasks, and is therefore unsuited to creativity, unconventional thinking or novel solutions.

What is available as genAI today focuses on understanding and processing data to create new content; it aims to capture the essence of human-created content. The accuracy and effectiveness of genAI are utterly dependent on the quality and quantity of the data used for its training.

I am oversimplifying the details here, but the point is that genAI is not creating new things out of thin air. Its output is based on patterns analysed and identified in the data it was trained on. It does not differentiate between right and wrong, or whether the data sets it relies on are factual or misrepresented.

But genAI can be trained to improve and add “skills” by training on more content created by real people. And this is the heart of the matter: Meta’s services do not provide Australians with the capacity to opt out and protect our intellectual property and likenesses. While I am not going to delve into other platforms such as TikTok, Snapchat and the like, the summary is that Australia does not have the strong privacy laws that exist in the EU.

 

The potential for abuse

The rise of AI and the potential for its misuse is not new. Ignoring the Hollywood scripts (hi Sonny, hi T-800), the doom and gloom at present is not the AI of “Terminator” or “I, Robot” fame.

To me, the most concerning aspect of genAI is its users. Oh, and corporate managers.

Think back to the Writers Guild of America stand-off in Hollywood just twelve months ago: the underpinning fear was the expectation that genAI would replace workers and their originality, acting as a loophole to gut wages and human value.

Or consider the more nefarious scenario of image manipulation, such as the Bacchus Marsh Grammar incident in June 2024, in which students and teachers had images taken from their social media accounts and manipulated into deepfake nudes using genAI.

It gets worse. A few months ago I was listening to “Hunting Warhead”, an investigative podcast by the CBC that dives into Taskforce Argos. It involves minors in very disturbing scenarios. The rise of genAI is without doubt being abused to contribute to this area.

Again, I need to stress that genAI relies on existing datasets. It is not _creatio ex nihilo_ – creation out of nothing. But genAI can certainly and easily take available content in directions never intended by the original content creator and not always for the greater good.

Before you say “but there are terms of service governing the use of genAI products” from sites like DALL-E 3, Midjourney, Stable Diffusion and FLUX.1, consider this: spinning up a self-hosted genAI instance is not as hard as you might think, as long as you have the hardware to run your LLM (Large Language Model).

 

What alternatives are there?

The internet is dominated by a handful of providers. Challengers to Rome have tried to lay siege but failed.

Tinybeans is a private photo-sharing app that provides a safe space for families to share their special moments and helps parents navigate the challenges of online sharing while looking after their little ones’ privacy and safety. Rather than adopting a siege mentality, its strategy is to carve out a niche and co-exist.

In a recent survey of users, Tinybeans found that 81% of respondents perceive it as the safest online platform, due to its robust privacy and security protections. A striking 90% of users said that privacy is the primary reason for choosing the app, citing the critical importance of keeping family moments private. In comparison, only about 8% of users trust photo-sharing apps like Instagram for these purposes. The app has over 900,000 monthly active users and recently reported record quarterly revenue of US$1.52m in Q4.

DRN was invited to pose some questions to Zsofi Paterson, the CEO of Tinybeans. Incidentally, Zsofi is a local Aussie success story, based out of her home on Sydney’s northern beaches.

 

The Q&A with Zsofi Paterson, CEO of Tinybeans

DRN: How can AI be used to misuse images of children, and how can you protect your children moving forward?

ZP: AI technology, with its advanced capabilities in image recognition, data mining, and deep learning, poses significant risks to privacy, especially for children’s images. AI algorithms can easily scan social media and other online platforms to identify, collect, and analyse children’s photos. These images can then be used to create detailed profiles, track activities, and even predict behaviours. In some cases, AI can facilitate the manipulation of images, leading to deep fakes or other malicious uses. This advanced technology heightens the potential for identity theft, cyberbullying, and exploitation, making it crucial for parents to be vigilant about their children’s online presence whether they are the one managing the account or their children are.

DRN: How do we manage the group of children aged between 13 to 16 years old that already have social media?

ZP: It’s all about open lines of communication with your teenagers and leading by example. I think the 36 months campaign is a great thing to protect children from the harms of social media. By reducing the age kids can use social media, we give them another 3 years to develop both cognitively and emotionally. I think parents also need to lead by example, by not overly sharing their children’s images and videos on their own public social media accounts. As a mother myself, I am genuinely concerned about the dangers that social media can pose to young minds, particularly in terms of the misuse of their images through AI technology and cyber-bullying which is sadly far too common in the online world.

DRN: Online team games such as Fortnite were both a saviour and a curse during the great Covid lockdowns; for this generation, is the genie already out of the bottle?

ZP: The pandemic certainly accelerated the integration of online gaming, like Fortnite, into the daily lives of young people. For many, these games were a vital means of connection during a time of unprecedented isolation, offering social interaction, entertainment, and a sense of normalcy. However, this increased engagement also highlighted the challenges associated with balancing screen time, online safety, and the potential for addictive behaviours.

In many ways, the “genie is out of the bottle” when it comes to online gaming for this generation. The social and technological shifts that occurred during the lockdowns have cemented these games as a significant part of their culture and social fabric. However, this doesn’t mean that we are powerless to manage or mitigate the effects.

The key moving forward is to promote balance and responsible gaming habits. Just as with social media, it’s important for parents, educators, and the gaming industry to work together to create environments that encourage healthy engagement. This could involve setting time limits, promoting alternative offline activities and educating young people about the potential risks of excessive gaming.

By fostering open dialogue between parents and children, we can better understand their experiences and help them navigate the digital world in a way that enhances their well-being.

DRN: Services such as Discord have a minimum age of 13 years old, but they are regularly used by Year 7 students (under 13) to communicate both academically and socially.

ZP: The reality that many under-13 students are using platforms like Discord, despite age restrictions, highlights a significant challenge in the digital landscape. While these platforms can offer valuable avenues for academic collaboration and social interaction, their use by younger students poses potential risks that need to be addressed.

One of the most effective strategies is to focus on digital literacy education, both for students and parents. Teaching young people about the importance of online safety, privacy, and responsible behaviour is crucial. Schools and parents can collaborate to provide guidance on how to use these platforms safely, helping students understand the implications of their online actions and interactions.

Parents play a critical role in monitoring their children’s online activities. Encouraging open communication about which platforms their children are using and why can help parents stay informed. Tools like parental controls and monitoring software can also be useful, but they should be used in a way that fosters trust rather than feeling invasive.

Platforms like Discord need to be more proactive in enforcing their age restrictions and providing tools that help parents and educators monitor usage. This could include more robust age verification processes, as well as features that allow for supervised accounts or restricted access for younger users.

DRN: Is the proposed raising of age restrictions creating a new range of problems? Arguably it’s both sides of the coin for mental health benefits (such as for introverts, neuro-divergent children) but opens participants to the possibility of 24/7 cyber bullying.

ZP: The online world can be a means for kids to connect with other kids, particularly for those who might not feel confident to do so in a physical setting. For introverted or neuro-divergent children, online platforms often provide a much-needed space where they can socialise without the pressures of face-to-face interaction. These environments can be particularly beneficial, allowing them to connect with like-minded peers, build confidence, and express themselves in ways that might be difficult offline. Raising the minimum age could cut off these vital avenues of connection for children who might otherwise struggle to engage socially.

However, I don’t think we should discount the impacts of cyberbullying that’s currently happening to our kids, which has incredible implications for their mental health. You only need to look at the increased rates of anxiety, depression and most tragically, suicide in young people to see the impact this is having on teenagers.

Younger children often lack the maturity to navigate complex social dynamics online, making them more vulnerable to manipulation, peer pressure, and negative influences. Raising the age could provide more time for children to develop the social and emotional skills needed to engage safely and responsibly in these environments. This is why the 36 months between 13 and 16 years of age is such a crucial difference in the development of young people and whether they should have access to public social media platforms.

DRN: How do we protect children’s privacy online?

ZP: In today’s digital world, parents sharing milestones such as their baby’s first steps or first smiles can feel like a natural way to connect with friends, family and their trusted circle. However, as new parents, it’s crucial to consider the implications of sharing these moments online, particularly when it comes to the baby’s privacy and safety. As the leading private photo-sharing app, Tinybeans provides a safe space for families to share these moments and we want to help parents navigate the challenges of online sharing while looking after their little ones’ privacy and safety.

  • Limited sharing: For expecting new parents, it’s crucial to be cautious about sharing your baby’s personal details, such as birth dates or location, to prevent any misuse of this information. This is where Tinybeans offers a great solution so parents can share cherished moments with their inner circle in a safe and private way.
  • Control privacy settings: To protect your baby’s privacy, use private sharing options on social media to control who can view your photos and videos. Alternatively, consider using a dedicated platform like Tinybeans to safely share updates with your close circle.
  • Disable chat options: For popular apps and games, turn off the chat option to prevent unwanted interactions with strangers.
  • Restrict search engine use: Removing or turning off search engines on devices used by young children can prevent them from stumbling upon inappropriate content.
  • Use parental controls: Employ parental controls and safe search tools on devices to keep online experiences positive and secure. These tools filter out inappropriate content and protect children’s browsing experiences.
  • Be mindful of digital footprint: Consider that children are too young to give informed consent about their digital presence. Avoid sharing too many details or photos that could leave a lasting digital trace. Prioritise their right to privacy and be thoughtful about what is shared on their behalf.

 

A Word from the CEO of Tinybeans

As the CEO of Tinybeans, an app that offers a safe and secure environment for parents to share images of their children with their inner circle, I certainly believe we should err on the side of caution and raise the age limit of when children can use social media. It’s essential that parents lead by example. By choosing a platform that champions privacy and fosters meaningful connections, parents demonstrate to their children the value of their digital footprints. This prepares the next generation to navigate their social media experiences wisely, respecting privacy and embracing secure online interactions.

 

Tinybeans as a Service

After pointing the finger at other people’s Terms of Service, I figured I should have a look at Tinybeans’ rather than taking their word for it. I am not legally trained; Zsofi, however, holds a Bachelor of Laws (Hons) and started her career as a lawyer.

Starting with the Privacy Policy: there are all the usual terms for a service that requires a subscription, and subscribers must provide information to access the service, including through payment gateways.

What is written in bold is “[Tinybeans] will not sell, share or trade the personal information you submit to any unrelated third parties, mass marketers, or to other non-affiliated entities for their marketing purposes without your consent.”

Tinybeans also notes that it does not respond to or honour ‘do not track’ (a.k.a. DNT) signals or similar mechanisms transmitted by web browsers.

Moving on to the Terms of Service, the notable parts are:

[3] you are hereby granted a non-exclusive, limited, non-transferable, freely revocable license to use the Service for your personal, noncommercial use only and as permitted by the features of the Service.

[9] You retain your rights to any content or information you submit, post, or display on or through the Service (“User Content”) and Tinybeans does not claim ownership of any of your User Content. Instead, for good and valuable consideration, including the right to use the Service, the receipt and sufficiency of which is hereby acknowledged, you hereby grant to Tinybeans, and their respective parents, subsidiaries and affiliate companies, successors, assigns, designees, agents and employees the non-exclusive, fully paid, sub-licensable, assignable, royalty-free right and license to use your User Content, in any and all media now known or hereafter developed, throughout the world, subject to our Privacy Policy, available here https://tinybeans.com/privacy. You can choose who can view your User Content and activities, including your albums, as described in the Privacy Policy. This license for such limited purposes continues even after you stop using the Service, with respect to aggregate and de-identified data derived from User Content and any residual backup copies of User Content made in the ordinary course of business. Use of User Content and activities by Tinybeans is governed by our Privacy Policy.

[10] Tinybeans is the owner of or otherwise licensed to use all parts of the Service, including all copy, software, graphics, designs, photographs, and all copyrights, trademarks, service marks, trade names, logos, and other intellectual property or proprietary rights contained therein. Some materials on the Service may belong to third parties who have authorized Tinybeans to display the materials, such as other User Content or User Submissions, or other such proprietary materials. By using the Service, you agree not to copy, distribute, modify or make derivative works of any materials without the prior written consent of the owner of such materials. You may not access or use the Service, or any portion of it, for any purpose other than to view the Tinybeans Content and make personal use of the services provided in accordance with these Terms and Conditions.

The elephant in the room, from my layman’s read of the legalese, is that whilst as a user you expressly agree not to “modify or make derivative works of any materials”, no such clause binds Tinybeans to the same.

Consider clause 9: by using the service, “you hereby grant to Tinybeans, and their respective parents, subsidiaries and affiliate companies, successors, assigns, designees, agents and employees the non-exclusive, fully paid, sub-licensable, assignable, royalty-free right and license to use your User Content, in any and all media now known or hereafter developed, throughout the world”.

The TOS is silent on the matter of derivatives of your content by Tinybeans and its respective parents, subsidiaries and affiliate companies, successors, assigns, designees, agents and employees. It is also silent on the use of machine learning on user content. To my layman’s reading, that leaves the door entirely open should Tinybeans decide it is a valuable growth strategy.

I sought clarification from Zsofi regarding these clauses and the potential for concern. Her response is reproduced below verbatim.

The privacy and security of our users’ memories and data are the core pillars of Tinybeans. We are committed to ensuring that all content shared on our platform is treated with the utmost respect and care.

Tinybeans does not create derivative works from user content. Any use of user content remains governed by our Privacy Policy, which outlines how we respect and protect users’ privacy, including their content.

As you noted, the Terms of Service grant Tinybeans a non-exclusive, fully-paid license to use user content across various media. This is primarily to ensure that we can deliver and enhance the services our users value, such as sharing family memories, creating photo books, and providing curated content experiences for the users.

Tinybeans does not use user content for machine learning purposes or to create derivative works. Our Privacy Policy is designed to safeguard user content and provide transparency on how content is utilised. Should there be any future plans to explore new technologies like machine learning, we would update our policies and provide clear communication to our users, ensuring they understand and consent to how their data is being used.

We are committed to maintaining the trust of our users and ensuring that all use of content is aligned with our core values of privacy, security, joy and connection.

 

What’s Kevin’s take?

This topic is vast and difficult to tackle. When I started writing the opening to this editorial, I left a very open framework to collate my thoughts and research. There are many materials and arguments to read and consider.

Along the way I got too deep into the Australian government’s proposed social media ban for children. That topic is both relevant and adjacent to the contents of this editorial, and I will address it separately at a later date.

The Meta erosion of user rights and privacy is less of a creep and more of an avalanche. To be accurate, the rights grab happened over a decade ago, and yet Facebook still had 3.07 billion monthly active users in 2024. The world has 8.2 billion people. I will let that sink in.

The underpinning concept of Tinybeans is a platform where user content and ownership are protected. A place where you can safely share moments of your life and expect them to remain between yourself and the selected group of people you share them with, not mined for content, used for genAI training, or fed to whatever the next wave of technology needs from our information.

As Zsofi said in the Q&A, “AI technology, with its advanced capabilities in image recognition, data mining, and deep learning, poses significant risks to privacy, especially for children’s images. AI algorithms can easily scan social media and other online platforms to identify, collect, and analyse children’s photos. These images can then be used to create detailed profiles, track activities, and even predict behaviours.”

 

Last words

Circling back to my opening paragraph, the first clear statement of the aphorism that the audience of mass media is the product, not the customer, appears to date back to “Television Delivers People”, a seven-minute 1973 film by Richard Serra and Carlota Fay Schoolman. The rationale remains true, and is even more relevant today.

The challenge of protecting ourselves on the internet is becoming ever more multi-dimensional and difficult. With the advent of genAI and the insatiable need to train it on real-world data, we stand to lose control of all that is precious and important to us.

As users, we all need to pay more attention to what is happening with our data and understand what rights we are signing away by using internet services, particularly free ones, including but not limited to social media platforms. If you need a wake-up call on what blind acceptance of a TOS can mean, just Google “Disney Plus contract controversy”.

Thankfully there is a multitude of apps that are more privacy-focused and transparent with their Terms of Service and, importantly, not affiliated with the better-known names. This latest move by Meta should serve as a wake-up call for everyone.

DRN would like to thank Zsofi Paterson, CEO of Tinybeans for sharing her time and insights for this editorial.

 

