Merged

20 commits
8feb8d0
A straightforward implementation of extending configurable storage quo…
landreev Nov 20, 2025
05ad1a9
Adding utility methods to the RestAssured suite for the dataset-level…
landreev Nov 20, 2025
19cfbe9
Adding RestAssured tests, the Guide entries and a short release note.…
landreev Nov 20, 2025
87de3e7
Switching the permission required for viewing current storageuse; it …
landreev Nov 20, 2025
90395fd
Merge branch 'develop' into 11987-storage-quotas-on-datasets
landreev Nov 21, 2025
388af25
a typo in the release note. #11987
landreev Nov 21, 2025
4337c47
Added an optional parameter to the quota APIs to show the quota that …
landreev Nov 21, 2025
43678bb
Cosmetic, fixes the comments in the command #11987
landreev Nov 21, 2025
4c61b5e
Fixed the required permissions in the API guide. #11987
landreev Nov 22, 2025
764ba9a
A quick experiment - add the remaining storage quota and/or file coun…
landreev Nov 24, 2025
cc9f9a2
cleanup per review #11987
landreev Nov 24, 2025
16dbd5d
Merge branch 'develop' into 11987-storage-quotas-on-datasets
landreev Nov 25, 2025
ac5620b
Extra documentation. #11987
landreev Nov 25, 2025
cbc098c
typo in the guide #11987
landreev Nov 25, 2025
1fbbff0
moving the dynamic, remaining upload allocations from /storageDriver …
landreev Dec 2, 2025
b832a2f
doc changes #11987
landreev Dec 3, 2025
2c7f928
Merge branch 'develop' into 11987-storage-quotas-on-datasets
landreev Dec 3, 2025
92141d4
Some refactoring per suggestions during QA #11987
landreev Dec 4, 2025
aa84df1
These 2 lines were a leftover of an earlier experiment, removed. #11987
landreev Dec 4, 2025
4c7b190
corrected an error in the guide. #11987
landreev Dec 4, 2025
4 changes: 4 additions & 0 deletions doc/release-notes/11987-storage-quotas-on-datasets.md
@@ -0,0 +1,4 @@
It is now possible to define storage quotas on individual datasets. See the API guide for more information.
The practical use case is for datasets in the top-level, root collection. This does not address the case of a user creating multiple datasets, but there is an open development issue for adding per-user storage quotas as well.

A convenience API `/api/datasets/{id}/uploadlimits` has been added to show the remaining storage quota and/or file count limit, if present.
2 changes: 2 additions & 0 deletions doc/sphinx-guides/source/admin/dataverses-datasets.rst
@@ -271,6 +271,8 @@ The effective store can be seen using::

curl http://$SERVER/api/datasets/$dataset-id/storageDriver

The output of the API will include the id, label, and type (for example, "file" or "s3") of the store, as well as whether it supports direct download and upload.
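
For illustration only, a response could look roughly like the following (the field names shown here are assumed for the example, not quoted from the API)::

{
  "status": "OK",
  "data": {
    "id": "local",
    "label": "Local Storage",
    "type": "file",
    "directUpload": false,
    "directDownload": false
  }
}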

To remove an assigned store and allow the dataset to inherit the store from its parent collection, use the following (only a superuser can do this) ::

curl -H "X-Dataverse-key: $API_TOKEN" -X DELETE http://$SERVER/api/datasets/$dataset-id/storageDriver
77 changes: 72 additions & 5 deletions doc/sphinx-guides/source/api/native-api.rst
@@ -1250,16 +1250,22 @@ Collection Storage Quotas

curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/quota"

Will output the storage quota allocated (in bytes), or a message indicating that the quota is not defined for the specific collection. The user identified by the API token must have the ``Manage`` permission on the collection.
Will output the storage quota allocated (in bytes), or a message indicating that the quota is not defined for the collection. If this is an unpublished collection, the user must have the ``ViewUnpublishedDataverse`` permission.
With the optional query parameter ``showInherited=true`` it will show the quota defined on the nearest parent collection, if any, when the collection does not have a quota configured directly.
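
For example, to check which quota, if any, the collection would inherit from a parent:

.. code-block::

curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/quota?showInherited=true"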

.. code-block::

curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/use"

Will output the dynamically cached total storage size (in bytes) used by the collection. The user identified by the API token must have the ``Edit`` permission on the collection.

To set or change the storage allocation quota for a collection:

.. code-block::

curl -X POST -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/quota/$SIZE_IN_BYTES"
curl -X PUT -H "X-Dataverse-key:$API_TOKEN" -d $SIZE_IN_BYTES "$SERVER_URL/api/dataverses/$ID/storage/quota"

This is API is superuser-only.
This API is superuser-only.


To delete a storage quota configured for a collection:
@@ -1268,9 +1274,70 @@ To delete a storage quota configured for a collection:

curl -X DELETE -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/dataverses/$ID/storage/quota"

This is API is superuser-only.
This API is superuser-only.

Storage Quotas on Individual Datasets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block::

curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/storage/quota"

Will output the storage quota allocated (in bytes), or a message indicating that the quota is not defined for this dataset. If this is an unpublished dataset, the user must have the ``ViewUnpublishedDataset`` permission.
With the optional query parameter ``showInherited=true`` it will show the quota defined on the nearest parent collection, if any, when the dataset does not have a quota configured directly.
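
The same parameter works at the dataset level, for example:

.. code-block::

curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/storage/quota?showInherited=true"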

.. code-block::

curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/storage/use"

Will output the dynamically cached total storage size (in bytes) used by the dataset. The user identified by the API token must have the ``Edit`` permission on the dataset.

To set or change the storage allocation quota for a dataset:

.. code-block::

curl -X PUT -H "X-Dataverse-key:$API_TOKEN" -d $SIZE_IN_BYTES "$SERVER_URL/api/datasets/$ID/storage/quota"

This API is superuser-only.


To delete a storage quota configured for a dataset:

.. code-block::

curl -X DELETE -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/storage/quota"

This API is superuser-only.

The following convenience API shows the dynamic values of the *remaining* storage size and/or file count quotas on the dataset, if present. For example:

.. code-block::

curl -H "X-Dataverse-key: $API_TOKEN" "http://localhost:8080/api/datasets/$dataset-id/uploadlimits"
{
"status": "OK",
"data": {
"uploadLimits": {
"numberOfFilesRemaining": 20,
"storageQuotaRemaining": 1048576
}
}
}

Or, when neither limit is present:

.. code-block::

{
"status": "OK",
"data": {
"uploadLimits": {}
}
}

This API requires the Edit permission on the dataset.
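
Putting the pieces together, a minimal sketch of the workflow (assuming quota enforcement is enabled as described below; ``$SUPERUSER_API_TOKEN`` is used here as a placeholder for a superuser's token):

.. code-block::

# superuser: allocate a 1 GiB storage quota on the dataset
curl -X PUT -H "X-Dataverse-key:$SUPERUSER_API_TOKEN" -d 1073741824 "$SERVER_URL/api/datasets/$ID/storage/quota"

# depositor: check the remaining upload allowances before starting an upload
curl -H "X-Dataverse-key:$API_TOKEN" "$SERVER_URL/api/datasets/$ID/uploadlimits"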

Use the ``/settings`` API to enable or disable the enforcement of storage quotas that are defined across the instance via the following setting. For example,
Use the ``/settings`` API to enable or disable the enforcement of storage quotas that are defined across the instance via the following setting:

.. code-block::

23 changes: 22 additions & 1 deletion src/main/java/edu/harvard/iq/dataverse/DatasetServiceBean.java
@@ -24,6 +24,7 @@
import edu.harvard.iq.dataverse.search.IndexServiceBean;
import edu.harvard.iq.dataverse.settings.FeatureFlags;
import edu.harvard.iq.dataverse.settings.SettingsServiceBean;
import edu.harvard.iq.dataverse.storageuse.StorageQuota;
import edu.harvard.iq.dataverse.util.BundleUtil;
import edu.harvard.iq.dataverse.util.SystemConfig;
import edu.harvard.iq.dataverse.workflows.WorkflowComment;
@@ -1118,5 +1119,25 @@ public int getDataFileCountByOwner(long id) {
Long c = em.createNamedQuery("Dataset.countFilesByOwnerId", Long.class).setParameter("ownerId", id).getSingleResult();
return c.intValue(); // ignoring the truncation since the number should never be too large
}


/**
*
* @todo: consider moving the quota method, from here and the DataverseServiceBean,
* to DvObjectServiceBean.
*/
public void saveStorageQuota(Dataset target, Long allocation) {
StorageQuota storageQuota = target.getStorageQuota();

if (storageQuota != null) {
storageQuota.setAllocation(allocation);

Reviewer (Contributor): Is there any check to prevent negative numbers? I guess 0 or negative will just block the ability to store more data.

Author (Contributor): Yes, that is the answer - it is possible to set it to a negative number, but it will be equivalent to zero in practice. To be honest, I am generally less concerned about preventing invalid or meaningless values from being entered when it comes to superuser-only APIs, under the assumption that superusers should know what they are doing and/or can be expected to own the consequences of their actions.

em.merge(storageQuota);
} else {
storageQuota = new StorageQuota();
storageQuota.setDefinitionPoint(target);
storageQuota.setAllocation(allocation);
target.setStorageQuota(storageQuota);
em.persist(storageQuota);
}
em.flush();
}
}
@@ -1307,7 +1307,12 @@ static String getBaseSchemaStringFromFile(String pathToJsonFile) {
" },\n" +
" \"required\": [\"datasetVersion\"]\n" +
"}\n";


/**
*
* @todo: consider moving these quota methods, and the DatasetServiceBean
* equivalent to DvObjectServiceBean.
*/
public void saveStorageQuota(Dataverse target, Long allocation) {
StorageQuota storageQuota = target.getStorageQuota();

114 changes: 114 additions & 0 deletions src/main/java/edu/harvard/iq/dataverse/api/Datasets.java
@@ -6158,4 +6158,118 @@ public Response updateLicense(@Context ContainerRequestContext crc,
}
}, getRequestUser(crc));
}

/**
* Storage quotas and use. Note that these methods replicate the
* collection-level equivalents 1:1. Both the quotas and the system for
* caching the size of the storage in use are implemented on
* DvObjectContainers internally and therefore work identically in both
* cases.
*/

@GET
@AuthRequired
@Path("{identifier}/storage/quota")
public Response getDatasetQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, @QueryParam("showInherited") boolean showInherited) throws WrappedResponse {
try {
Long bytesAllocated = execCommand(new GetDatasetQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDatasetOrDie(dvIdtf), showInherited));
if (bytesAllocated != null) {
return ok(MessageFormat.format(BundleUtil.getStringFromBundle("dataset.storage.quota.allocation"),bytesAllocated));
}
return ok(BundleUtil.getStringFromBundle("dataset.storage.quota.notdefined"));
} catch (WrappedResponse ex) {
return ex.getResponse();
}
}

@PUT
@AuthRequired
@Path("{identifier}/storage/quota")
public Response setDatasetQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, String value) throws WrappedResponse {
try {
Long bytesAllocated;
try {
bytesAllocated = Long.parseLong(value);
} catch (NumberFormatException nfe){
return error(Status.BAD_REQUEST, value + " is not a valid number of bytes");
}
execCommand(new SetDatasetQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDatasetOrDie(dvIdtf), bytesAllocated));
return ok(BundleUtil.getStringFromBundle("dataset.storage.quota.updated"));
} catch (WrappedResponse ex) {
return ex.getResponse();
}
}

@DELETE
@AuthRequired
@Path("{identifier}/storage/quota")
public Response deleteDatasetQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf) throws WrappedResponse {
try {
execCommand(new DeleteDatasetQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDatasetOrDie(dvIdtf)));
return ok(BundleUtil.getStringFromBundle("dataset.storage.quota.deleted"));
} catch (WrappedResponse ex) {
return ex.getResponse();
}
}

/**
*
* @param crc
* @param identifier
* @return
* @throws edu.harvard.iq.dataverse.api.AbstractApiBean.WrappedResponse
* @todo: add an optional parameter that would force the recorded storage use
* to be recalculated (or should that be a POST version of this API?)
*/
@GET
@AuthRequired
@Path("{identifier}/storage/use")
public Response getDatasetStorageUse(@Context ContainerRequestContext crc, @PathParam("identifier") String identifier) throws WrappedResponse {
return response(req -> ok(MessageFormat.format(BundleUtil.getStringFromBundle("dataset.storage.use"),
execCommand(new GetDatasetStorageUseCommand(req, findDatasetOrDie(identifier))))), getRequestUser(crc));
}

@GET
@AuthRequired
@Path("{identifier}/uploadlimits")
public Response getUploadLimits(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf,
@Context UriInfo uriInfo,
@Context HttpHeaders headers) throws WrappedResponse {

Dataset dataset;

try {
dataset = findDatasetOrDie(dvIdtf);
} catch (WrappedResponse ex) {
return error(Response.Status.NOT_FOUND, "No such dataset");
}

AuthenticatedUser user;
try {
user = getRequestAuthenticatedUserOrDie(crc);
} catch (WrappedResponse ex) {
return error(Response.Status.BAD_REQUEST, "This API call requires authentication.");
}
if (!permissionSvc.requestOn(createDataverseRequest(user), dataset).has(Permission.EditDataset)) {
return error(Response.Status.FORBIDDEN, "This API call requires EditDataset permission.");
}

JsonObjectBuilder limits = new NullSafeJsonBuilder();

// Add optional elements - storage size and file count limits, if present:
if (systemConfig.isStorageQuotasEnforced()) {
UploadSessionQuotaLimit uploadSessionQuota = fileService.getUploadSessionQuotaLimit(dataset);
if (uploadSessionQuota != null) {
limits.add("storageQuotaRemaining", uploadSessionQuota.getRemainingQuotaInBytes());
}
}

Integer effectiveFileCountLimit = dataset.getEffectiveDatasetFileCountLimit();

if (effectiveFileCountLimit != null) {
limits.add("numberOfFilesRemaining", effectiveFileCountLimit - datasetService.getDataFileCountByOwner(dataset.getId()));
}

return ok(new NullSafeJsonBuilder().add("uploadLimits", limits));
}
}
16 changes: 11 additions & 5 deletions src/main/java/edu/harvard/iq/dataverse/api/Dataverses.java
@@ -1236,9 +1236,9 @@ public Response getStorageSize(@Context ContainerRequestContext crc, @PathParam(
@GET
@AuthRequired
@Path("{identifier}/storage/quota")
public Response getCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf) throws WrappedResponse {
public Response getCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, @QueryParam("showInherited") boolean showInherited) throws WrappedResponse {
try {
Long bytesAllocated = execCommand(new GetCollectionQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDataverseOrDie(dvIdtf)));
Long bytesAllocated = execCommand(new GetCollectionQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDataverseOrDie(dvIdtf), showInherited));
if (bytesAllocated != null) {
return ok(MessageFormat.format(BundleUtil.getStringFromBundle("dataverse.storage.quota.allocation"),bytesAllocated));
}
@@ -1248,11 +1248,17 @@ public Response getCollectionQuota(@Context ContainerRequestContext crc, @PathPa
}
}

@POST
@PUT
@AuthRequired
@Path("{identifier}/storage/quota/{bytesAllocated}")
public Response setCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, @PathParam("bytesAllocated") Long bytesAllocated) throws WrappedResponse {
@Path("{identifier}/storage/quota")
public Response setCollectionQuota(@Context ContainerRequestContext crc, @PathParam("identifier") String dvIdtf, String value) throws WrappedResponse {
try {
Long bytesAllocated;
try {
bytesAllocated = Long.parseLong(value);
} catch (NumberFormatException nfe){
return error(Status.BAD_REQUEST, value + " is not a valid number of bytes");
}
execCommand(new SetCollectionQuotaCommand(createDataverseRequest(getRequestUser(crc)), findDataverseOrDie(dvIdtf), bytesAllocated));
return ok(BundleUtil.getStringFromBundle("dataverse.storage.quota.updated"));
} catch (WrappedResponse ex) {
@@ -0,0 +1,55 @@
package edu.harvard.iq.dataverse.engine.command.impl;

import edu.harvard.iq.dataverse.Dataset;
import edu.harvard.iq.dataverse.authorization.users.AuthenticatedUser;
import edu.harvard.iq.dataverse.engine.command.AbstractVoidCommand;
import edu.harvard.iq.dataverse.engine.command.CommandContext;
import edu.harvard.iq.dataverse.engine.command.DataverseRequest;
import edu.harvard.iq.dataverse.engine.command.RequiredPermissions;
import edu.harvard.iq.dataverse.engine.command.exception.CommandException;
import edu.harvard.iq.dataverse.engine.command.exception.IllegalCommandException;
import edu.harvard.iq.dataverse.engine.command.exception.PermissionException;
import edu.harvard.iq.dataverse.storageuse.StorageQuota;
import edu.harvard.iq.dataverse.util.BundleUtil;
import java.util.logging.Logger;

/**
*
* @author landreev
*
* A superuser-only command:
*/
@RequiredPermissions({})
public class DeleteDatasetQuotaCommand extends AbstractVoidCommand {

private static final Logger logger = Logger.getLogger(DeleteDatasetQuotaCommand.class.getCanonicalName());

private final Dataset targetDataset;

public DeleteDatasetQuotaCommand(DataverseRequest aRequest, Dataset target) {
super(aRequest, target);
targetDataset = target;
}

@Override
public void executeImpl(CommandContext ctxt) throws CommandException {
// first check if user is a superuser
if ( (!(getUser() instanceof AuthenticatedUser) || !getUser().isSuperuser() ) ) {
throw new PermissionException(BundleUtil.getStringFromBundle("dataset.storage.quota.superusersonly"),
this, null, targetDataset);
}

if (targetDataset == null) {
throw new IllegalCommandException("", this);
}

StorageQuota storageQuota = targetDataset.getStorageQuota();

if (storageQuota != null && storageQuota.getAllocation() != null) {
// The method below, in dataverseServiceBean, can be used to delete
// quotas defined on either of the DvObjectContainer classes:
ctxt.dataverses().disableStorageQuota(storageQuota);
}
// ... and if no quota was enabled on the dataset - nothing to do = success
}
}