Java 8 has reached end of support and will be deprecated on January 31, 2026. After deprecation, you won't be able to deploy Java 8 applications, even if your organization previously used an organization policy to re-enable deployments of legacy runtimes. Your existing Java 8 applications will continue to run and receive traffic after their deprecation date. We recommend that you migrate to the latest supported version of Java.
Structuring Data for Strong Consistency
Datastore provides high availability, scalability and durability by
distributing data over many machines and using synchronous
replication over a wide geographic area. This design involves a tradeoff, however: write throughput for any single
entity group is limited to about
one commit per second, and queries or transactions that
span multiple entity groups are restricted. This page describes these limitations in more
detail and discusses best practices for structuring your data to support strong
consistency while still meeting your application's write throughput
requirements.
Strongly-consistent reads always return current data, and, if performed within a
transaction, will appear to come from a single, consistent snapshot. However,
queries must specify an ancestor filter in order to be strongly-consistent or
participate in a transaction, and transactions can involve at most 25 entity
groups. Eventually-consistent reads do not have those limitations, and are
adequate in many cases. Using eventually-consistent reads can allow you to
distribute your data among a larger number of entity groups, enabling you to
obtain greater write throughput by executing commits in parallel on the
different entity groups. However, you need to understand the characteristics of
eventually-consistent reads to determine whether they are suitable for
your application:
- The results from these reads might not reflect the latest transactions. This
can occur because these reads do not ensure that the replica they are running
on is up-to-date. Instead, they use whatever data is available on that replica
at the time of query execution. Replication latency is almost always less than
a few seconds.
- A committed transaction that spanned multiple entities might appear to have
been applied to some of the entities and not others. Note, though, that a
transaction will never appear to have been partially applied within a single
entity.
- The query results can include entities that should not have been included
according to the filter criteria, and might exclude entities that should have
been included. This can occur because the indexes might be read at a different
version than the entities themselves.
To understand how to structure your data for strong consistency, compare two
different approaches for a simple guestbook application. The first approach
creates a new root entity for each entity that is created:
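    protected Entity createGreeting(
        DatastoreService datastore, User user, Date date, String content) {
      // No parent key specified, so Greeting is a root entity.
      Entity greeting = new Entity("Greeting");
      greeting.setProperty("user", user);
      greeting.setProperty("date", date);
      greeting.setProperty("content", content);

      datastore.put(greeting);
      return greeting;
    }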
It then queries on the entity kind Greeting
for the ten most recent greetings.
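    protected List<Entity> listGreetingEntities(DatastoreService datastore) {
      Query query = new Query("Greeting").addSort("date", Query.SortDirection.DESCENDING);
      return datastore.prepare(query).asList(FetchOptions.Builder.withLimit(10));
    }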
However, because this scheme uses a non-ancestor query, the replica used to perform
the query might not have seen the new greeting by the time the
query is executed. Nonetheless, nearly all writes are available to
non-ancestor queries within a few seconds of commit. For many applications,
showing the results of a non-ancestor query alongside the
current user's own changes is usually enough to make such replication
latencies completely acceptable.
If strong consistency is important to your application, an alternate approach is
to write entities with an ancestor path that identifies the same root entity
across all entities that must be read in a single, strongly-consistent ancestor
query:
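    protected Entity createGreeting(
        DatastoreService datastore, User user, Date date, String content) {
      // String guestbookName = "my guestbook"; -- Set elsewhere (injected to the constructor).
      Key guestbookKey = KeyFactory.createKey("Guestbook", guestbookName);

      // Place greeting in the same entity group as guestbook.
      Entity greeting = new Entity("Greeting", guestbookKey);
      greeting.setProperty("user", user);
      greeting.setProperty("date", date);
      greeting.setProperty("content", content);

      datastore.put(greeting);
      return greeting;
    }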
You will then be able to perform a strongly-consistent ancestor query within the
entity group identified by the common root entity:
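    protected List<Entity> listGreetingEntities(DatastoreService datastore) {
      Key guestbookKey = KeyFactory.createKey("Guestbook", guestbookName);
      Query query =
          new Query("Greeting", guestbookKey)
              .setAncestor(guestbookKey)
              .addSort("date", Query.SortDirection.DESCENDING);
      return datastore.prepare(query).asList(FetchOptions.Builder.withLimit(10));
    }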
This approach achieves strong consistency by writing to a single entity group
per guestbook, but it also limits changes to the guestbook to no more than
one write per second (the supported limit for entity groups). If your application
is likely to encounter heavier write usage, you might need to use
other means: for example, you might put recent posts in
memcache with an expiration
and display a mix of recent posts from memcache and
Datastore, or you might cache them in a cookie, put some state
in the URL, or use something else entirely. The goal is to find a caching solution
that provides the data for the current user for the period of time in which the
user is posting to your application. Remember, if you do a get, an ancestor
query, or any operation within a transaction, you will always see the most
recently written data.
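As a minimal sketch of that kind of caching layer, the method below serves recent greetings from the App Engine memcache service and falls back to the eventually-consistent, non-ancestor query on a miss. The cache key name, the 60-second expiration, and the method name are illustrative choices, not part of the guestbook sample; it assumes the classes from com.google.appengine.api.memcache are imported.

    protected List<Entity> listRecentGreetings(DatastoreService datastore) {
      // Uses MemcacheService, MemcacheServiceFactory, and Expiration from
      // com.google.appengine.api.memcache.
      MemcacheService cache = MemcacheServiceFactory.getMemcacheService();
      String cacheKey = "recent-greetings"; // Illustrative key name.

      // Serve from memcache when possible so reads don't depend on a single
      // entity group's write throughput.
      @SuppressWarnings("unchecked")
      List<Entity> cached = (List<Entity>) cache.get(cacheKey);
      if (cached != null) {
        return cached;
      }

      // Fall back to the eventually-consistent, non-ancestor query.
      Query query = new Query("Greeting").addSort("date", Query.SortDirection.DESCENDING);
      List<Entity> recent =
          new ArrayList<>(datastore.prepare(query).asList(FetchOptions.Builder.withLimit(10)));

      // Cache the result briefly; stale entries expire on their own.
      cache.put(cacheKey, recent, Expiration.byDeltaSeconds(60));
      return recent;
    }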
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-29 UTC."],[[["\u003cp\u003eThis API is compatible with first-generation runtimes and can be used when upgrading to the corresponding second-generation runtimes, with a separate migration guide available for those updating to App Engine Java 11/17.\u003c/p\u003e\n"],["\u003cp\u003eDatastore ensures high availability, scalability, and durability through data distribution and synchronous replication, but write throughput is limited to one commit per second for a single entity group, with restrictions on queries or transactions spanning multiple groups.\u003c/p\u003e\n"],["\u003cp\u003eStrongly-consistent reads provide current data and a consistent snapshot within transactions, requiring an ancestor filter for queries and limiting transactions to 25 entity groups, while eventually-consistent reads offer more flexibility but may not reflect the latest transactions.\u003c/p\u003e\n"],["\u003cp\u003eFor applications needing strong consistency, structuring data with a common ancestor path enables strongly-consistent ancestor queries, although this limits writes to one per second per entity group.\u003c/p\u003e\n"],["\u003cp\u003eFor applications with high write usage, consider using alternative solutions such as memcache or cookies to cache recent posts and display a mix of recent data alongside Datastore results.\u003c/p\u003e\n"]]],[],null,["# Structuring Data for Strong Consistency\n\n| This API is supported for first-generation runtimes and can be used when [upgrading to corresponding second-generation runtimes](/appengine/docs/standard/\n| java-gen2\n|\n| /services/access). If you are updating to the App Engine Java 11/17 runtime, refer to the [migration guide](/appengine/migration-center/standard/migrate-to-second-gen/java-differences) to learn about your migration options for legacy bundled services.\n\nDatastore provides high availability, scalability and durability by\ndistributing data over many machines and using synchronous\nreplication over a wide geographic area. However, there is a tradeoff in this\ndesign, which is that the write throughput for any single\n[*entity group*](/appengine/docs/legacy/standard/java/datastore/entities#Ancestor_paths) is limited to about\none commit per second, and there are limitations on queries or transactions that\nspan multiple entity groups. This page describes these limitations in more\ndetail and discusses best practices for structuring your data to support strong\nconsistency while still meeting your application's write throughput\nrequirements.\n\nStrongly-consistent reads always return current data, and, if performed within a\ntransaction, will appear to come from a single, consistent snapshot. However,\nqueries must specify an ancestor filter in order to be strongly-consistent or\nparticipate in a transaction, and transactions can involve at most 25 entity\ngroups. Eventually-consistent reads do not have those limitations, and are\nadequate in many cases. 
Using eventually-consistent reads can allow you to\ndistribute your data among a larger number of entity groups, enabling you to\nobtain greater write throughput by executing commits in parallel on the\ndifferent entity groups. But, you need to understand the characteristics of\neventually-consistent reads in order to determine whether they are suitable for\nyour application:\n\n- The results from these reads might not reflect the latest transactions. This can occur because these reads do not ensure that the replica they are running on is up-to-date. Instead, they use whatever data is available on that replica at the time of query execution. Replication latency is almost always less than a few seconds.\n- A committed transaction that spanned multiple entities might appear to have been applied to some of the entities and not others. Note, though, that a transaction will never appear to have been partially applied within a single entity.\n- The query results can include entities that should not have been included according to the filter criteria, and might exclude entities that should have been included. This can occur because indexes might be read at a different version than the entity itself is read at.\n\nTo understand how to structure your data for strong consistency, compare two\ndifferent approaches for a simple guestbook application. The first approach\ncreates a new root entity for each entity that is created: \n\n protected Entity createGreeting(\n DatastoreService datastore, User user, Date date, String content) {\n // No parent key specified, so Greeting is a root entity.\n Entity greeting = new Entity(\"Greeting\");\n greeting.setProperty(\"user\", user);\n greeting.setProperty(\"date\", date);\n greeting.setProperty(\"content\", content);\n\n datastore.put(greeting);\n return greeting;\n }\n\nIt then queries on the entity kind `Greeting` for the ten most recent greetings. \n\n protected List\u003cEntity\u003e listGreetingEntities(DatastoreService datastore) {\n Query query = new Query(\"Greeting\").addSort(\"date\", Query.SortDirection.DESCENDING);\n return datastore.prepare(query).asList(FetchOptions.Builder.withLimit(10));\n }\n\nHowever, because you are using a non-ancestor query, the replica used to perform\nthe query in this scheme might not have seen the new greeting by the time the\nquery is executed. Nonetheless, nearly all writes will be available for\nnon-ancestor queries within a few seconds of commit. 
For many applications, a\nsolution that provides the results of a non-ancestor query in the context of the\ncurrent user's own changes will usually be sufficient to make such replication\nlatencies completely acceptable.\n\nIf strong consistency is important to your application, an alternate approach is\nto write entities with an ancestor path that identifies the same root entity\nacross all entities that must be read in a single, strongly-consistent ancestor\nquery: \n\n protected Entity createGreeting(\n DatastoreService datastore, User user, Date date, String content) {\n // String guestbookName = \"my guestbook\"; -- Set elsewhere (injected to the constructor).\n Key guestbookKey = KeyFactory.createKey(\"Guestbook\", guestbookName);\n\n // Place greeting in the same entity group as guestbook.\n Entity greeting = new Entity(\"Greeting\", guestbookKey);\n greeting.setProperty(\"user\", user);\n greeting.setProperty(\"date\", date);\n greeting.setProperty(\"content\", content);\n\n datastore.put(greeting);\n return greeting;\n }\n\nYou will then be able to perform a strongly-consistent ancestor query within the\nentity group identified by the common root entity: \n\n protected List\u003cEntity\u003e listGreetingEntities(DatastoreService datastore) {\n Key guestbookKey = KeyFactory.createKey(\"Guestbook\", guestbookName);\n Query query =\n new Query(\"Greeting\", guestbookKey)\n .setAncestor(guestbookKey)\n .addSort(\"date\", Query.SortDirection.DESCENDING);\n return datastore.prepare(query).asList(FetchOptions.Builder.withLimit(10));\n }\n\nThis approach achieves strong consistency by writing to a single entity group\nper guestbook, but it also limits changes to the guestbook to no more than\n1 write per second (the supported limit for entity groups). If your application\nis likely to encounter heavier write usage, you might need to consider using\nother means: for example, you might put recent posts in a\n[memcache](/appengine/docs/legacy/standard/java/memcache) with an expiration\nand display a mix of recent posts from the memcache and\nDatastore, or you might cache them in a cookie, put some state\nin the URL, or something else entirely. The goal is to find a caching solution\nthat provides the data for the current user for the period of time in which the\nuser is posting to your application. Remember, if you do a get, an ancestor\nquery, or any operation within a transaction, you will always see the most\nrecently written data."]]