Reusable constraint types to use with typing.Annotated

Overview

annotated-types

PEP-593 added typing.Annotated as a way of adding context-specific metadata to existing types, and specifies that Annotated[T, x] should be treated as T by any tool or library without special logic for x.

This package provides metadata objects which can be used to represent common constraints such as upper and lower bounds on scalar values and collection sizes, a Predicate marker for runtime checks, and non-normative descriptions of how we intend these metadata to be interpreted. In some cases, we also note alternative representations which do not require this package.

Install

pip install annotated-types

Examples

from typing import Annotated
from annotated_types import Gt, Len, Predicate

def is_prime(n: int) -> bool: ...  # any user-defined predicate

class MyClass:
    age: Annotated[int, Gt(18)]                         # Valid: 19, 20, ...
                                                        # Invalid: 17, 18, "19", 19.0, ...
    factors: list[Annotated[int, Predicate(is_prime)]]  # Valid: 2, 3, 5, 7, 11, ...
                                                        # Invalid: 4, 8, -2, 5.0, "prime", ...

    my_list: Annotated[list[int], 0:10]                 # Valid: [], [10, 20, 30, 40, 50]
                                                        # Invalid: (1, 2), ["abc"], [0] * 20
    your_set: Annotated[set[int], Len(0, 10)]           # Valid: {1, 2, 3}, ...
                                                        # Invalid: "Well, you get the idea!"

Documentation

While annotated-types avoids runtime checks for performance, users should not construct invalid combinations such as MultipleOf("non-numeric") or Annotated[int, Len(3)]. Downstream implementors may choose to raise an error, emit a warning, silently ignore a metadata item, etc., if the metadata objects described below are used with an incompatible type - or for any other reason!

Gt, Ge, Lt, Le

Express inclusive and/or exclusive bounds on orderable values - which may be numbers, dates, times, strings, sets, etc. Note that the boundary value need not be of the same type that was annotated, so long as they can be compared: Annotated[int, Gt(1.5)] is fine, for example, and implies that the value is an integer x such that x > 1.5. No interpretation is specified for special values such as nan.

We suggest that implementors may also interpret functools.partial(operator.lt, 1.5) as being equivalent to Gt(1.5) - calling it with a value evaluates 1.5 < value - for users who wish to avoid a runtime dependency on the annotated-types package.
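For example, the partially-applied comparison below performs the same check as Gt(1.5), using only the standard library:

```python
from functools import partial
import operator

# partial(operator.lt, 1.5) binds 1.5 as the *left* operand, so calling it
# with a value evaluates 1.5 < value -- the same check as Gt(1.5).
exclusive_minimum = partial(operator.lt, 1.5)

assert exclusive_minimum(2.0)        # 2.0 > 1.5
assert not exclusive_minimum(1.5)    # the bound itself is excluded
assert not exclusive_minimum(1.0)
```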

To be explicit, these types have the following meanings:

  • Gt(x) - value must be "Greater Than" x - equivalent to exclusive minimum
  • Ge(x) - value must be "Greater than or Equal" to x - equivalent to inclusive minimum
  • Lt(x) - value must be "Less Than" x - equivalent to exclusive maximum
  • Le(x) - value must be "Less than or Equal" to x - equivalent to inclusive maximum
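As a sketch of how a downstream validator might interpret these four markers - the stand-in classes here are illustrative (though they use the library's actual attribute names), and check_bound is a hypothetical helper, not part of annotated-types:

```python
from dataclasses import dataclass
from typing import Any

# Lightweight stand-ins for the annotated-types markers, for illustration.
@dataclass
class Gt:
    gt: Any

@dataclass
class Ge:
    ge: Any

@dataclass
class Lt:
    lt: Any

@dataclass
class Le:
    le: Any

def check_bound(value: Any, bound: Any) -> bool:
    """Interpret one bound marker against a value."""
    if isinstance(bound, Gt):
        return value > bound.gt
    if isinstance(bound, Ge):
        return value >= bound.ge
    if isinstance(bound, Lt):
        return value < bound.lt
    if isinstance(bound, Le):
        return value <= bound.le
    return True  # unknown metadata: per PEP-593, ignore it

assert check_bound(19, Gt(18))
assert not check_bound(18, Gt(18))  # exclusive minimum
assert check_bound(18, Ge(18))      # inclusive minimum
assert check_bound(2.0, Gt(1.5))    # boundary type need not match the value's
```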

Interval

Interval(gt, ge, lt, le) allows you to specify an upper and lower bound with a single metadata object. None attributes should be ignored, and non-None attributes treated as per the single bounds above.
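One way an implementor might apply those rules - the Interval stand-in below mirrors the library's attribute names, but in_interval is a hypothetical helper:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Interval:  # illustrative stand-in with the library's attribute names
    gt: Optional[Any] = None
    ge: Optional[Any] = None
    lt: Optional[Any] = None
    le: Optional[Any] = None

def in_interval(value: Any, interval: Interval) -> bool:
    """Apply each non-None attribute as the corresponding single bound."""
    if interval.gt is not None and not value > interval.gt:
        return False
    if interval.ge is not None and not value >= interval.ge:
        return False
    if interval.lt is not None and not value < interval.lt:
        return False
    if interval.le is not None and not value <= interval.le:
        return False
    return True  # None attributes are ignored

assert in_interval(5, Interval(gt=0, le=10))
assert not in_interval(0, Interval(gt=0, le=10))  # gt is exclusive
assert in_interval(10, Interval(gt=0, le=10))     # le is inclusive
```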

MultipleOf

MultipleOf(multiple_of=x) might be interpreted in two ways:

  1. Python semantics, implying value % multiple_of == 0, or
  2. JSONschema semantics, where int(value / multiple_of) == value / multiple_of.

We encourage users to be aware of these two common interpretations and their distinct behaviours, especially since very large or non-integer numbers make it easy to cause silent data corruption due to floating-point imprecision.

We encourage libraries to carefully document which interpretation they implement.
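The two interpretations genuinely diverge for floats. For example, with value = 1.0 and multiple_of = 0.1, plain arithmetic shows them disagreeing:

```python
value, multiple_of = 1.0, 0.1

# Interpretation 1: Python semantics. Fails here, because 0.1 is not exactly
# representable in binary floating point, so the remainder is nonzero.
python_ok = value % multiple_of == 0

# Interpretation 2: JSONschema semantics. Passes here, because 1.0 / 0.1
# happens to round to exactly 10.0.
jsonschema_ok = int(value / multiple_of) == value / multiple_of

assert not python_ok
assert jsonschema_ok
```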

Len

Len() implies that min_inclusive <= len(value) < max_exclusive. We recommend that libraries interpret slice objects identically to Len(), making all the following cases equivalent:

  • Annotated[list, :10]
  • Annotated[list, 0:10]
  • Annotated[list, None:10]
  • Annotated[list, slice(0, 10)]
  • Annotated[list, Len(0, 10)]
  • Annotated[list, Len(max_exclusive=10)]

And of course you can describe lists of three or more elements (Len(min_inclusive=3)), four, five, or six elements (Len(4, 7) - note exclusive-maximum!) or exactly eight elements (Len(8, 9)).

Implementors: note that Len() should always have an integer value for min_inclusive, but slice objects can also have start=None.
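A sketch of normalizing a slice into Len semantics per the recommendation above - slice_to_len_bounds and len_ok are hypothetical helper names, not part of annotated-types:

```python
from typing import Any, Optional, Tuple

def slice_to_len_bounds(s: slice) -> Tuple[int, Optional[int]]:
    """Normalize a slice to Len's (min_inclusive, max_exclusive) pair.

    slice(0, 10) and slice(None, 10) both mean 0 <= len(value) < 10;
    start=None is treated as 0, so min_inclusive is always an integer.
    """
    return (s.start if s.start is not None else 0, s.stop)

def len_ok(value: Any, s: slice) -> bool:
    lo, hi = slice_to_len_bounds(s)
    return lo <= len(value) and (hi is None or len(value) < hi)

# Annotated[list, :10], [0:10] and [None:10] all reduce to the same check:
assert slice_to_len_bounds(slice(None, 10)) == slice_to_len_bounds(slice(0, 10)) == (0, 10)
assert len_ok([1] * 9, slice(0, 10))
assert not len_ok([1] * 10, slice(None, 10))  # exclusive maximum
```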

Timezone

Timezone can be used with a datetime or a time to express which timezones are allowed. Annotated[datetime, Timezone[None]] must be a naive datetime. Timezone[...] (literal ellipsis) expresses that any timezone-aware datetime is allowed. You may also pass a specific timezone string or timezone object such as Timezone[timezone.utc] or Timezone["Africa/Abidjan"] to express that you only allow a specific timezone, though we note that this is often a symptom of fragile design.
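A simplified sketch of how a library might interpret the three cases; the matches_timezone_constraint helper is hypothetical, and it checks only tzinfo (a full implementation would also consider utcoffset() when deciding awareness):

```python
from datetime import datetime, timezone
from typing import Any

def matches_timezone_constraint(dt: datetime, tz: Any) -> bool:
    """Interpret a Timezone-style constraint against a datetime.

    tz=None      -> require a naive datetime
    tz=Ellipsis  -> require any timezone-aware datetime
    otherwise    -> require that specific tzinfo
    """
    if tz is None:
        return dt.tzinfo is None
    if tz is Ellipsis:
        return dt.tzinfo is not None
    return dt.tzinfo == tz

naive = datetime(2022, 10, 12)
aware = datetime(2022, 10, 12, tzinfo=timezone.utc)

assert matches_timezone_constraint(naive, None)         # Timezone[None]
assert matches_timezone_constraint(aware, ...)          # Timezone[...]
assert matches_timezone_constraint(aware, timezone.utc) # specific timezone
assert not matches_timezone_constraint(naive, ...)
```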

Predicate

Predicate(func: Callable) expresses that func(value) is truthy for valid values. Users should prefer the statically inspectable metadata above, but if you need the full power and flexibility of arbitrary runtime predicates... here it is.

We provide a few predefined predicates for common string constraints: IsLower = Predicate(str.islower), IsUpper = Predicate(str.isupper), and IsDigit = Predicate(str.isdigit). Users are encouraged to use methods which can be given special handling, and avoid indirection like lambda s: s.lower().

Some libraries might have special logic to handle known or understandable predicates, for example by checking for str.isdigit and using its presence to both call custom logic to enforce digit-only strings, and customise some generated external schema.
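A sketch of both the generic interpretation (call the function, require truthiness) and such special-casing - the Predicate stand-in mirrors the library's attribute name, while check and its digit-only branch are hypothetical:

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Predicate:  # illustrative stand-in with the library's attribute name
    func: Callable

def check(value: Any, predicate: Predicate) -> bool:
    # A library with special handling might recognise well-known methods
    # by identity, and could also use this to customise a generated schema.
    if predicate.func is str.isdigit:
        return isinstance(value, str) and value.isdigit()
    # Generic fallback: the value is valid iff func(value) is truthy.
    return bool(predicate.func(value))

assert check("123", Predicate(str.isdigit))
assert not check("12.5", Predicate(str.isdigit))
assert not check(123, Predicate(str.isdigit))  # special case avoids a TypeError
assert check(4, Predicate(lambda x: x % 2 == 0))
```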

We do not specify what behaviour should be expected for predicates that raise an exception. For example Annotated[int, Predicate(str.isdigit)] might silently skip invalid constraints, or statically raise an error; or it might try calling it and then propagate or discard the resulting TypeError: descriptor 'isdigit' for 'str' objects doesn't apply to a 'int' object exception. We encourage libraries to document the behaviour they choose.

Design & History

This package was designed at the PyCon 2022 sprints by the maintainers of Pydantic and Hypothesis, with the goal of making it as easy as possible for end-users to provide more informative annotations for use by runtime libraries.

It is deliberately minimal, and following PEP-593 allows considerable downstream discretion in what (if anything!) they choose to support. Nonetheless, we expect that staying simple and covering only the most common use-cases will give users and maintainers the best experience we can. If you'd like more constraints for your types - follow our lead, by defining them and documenting them downstream!

Comments
  • add GroupedMetadata as a base class for Interval

    The main idea here is to generalize the pattern in Interval so that Pydantic and similar can define their Field as inheriting from GroupedMetadata, thus anything that can parse annotated-types will know how to unpack it (if it is not already unpacked by the PEP-646) even if it is a custom subclass in a library like Pydantic.

    Without this, some generic annotated-types parser would not know how to handle Pydantic's Field type without specific knowledge of Pydantic.

    opened by adriangb 19
  • I think we should remove `Regex`

    I tried writing up Regex docs that would describe how people actually want to use them, and...

    Regex(regex_pattern=p, regex_flags=x) implies that the string should contain a match for p with flags x, at any position in the string. If you want the full string to match, or the match to be at the start or end of the string, you can use boundary markers like ^...$.

    Regex() can be used with unicode strings or byte strings; if either are allowed we suggest using one Regex() item for each type, and therefore ignoring Regex() items with the wrong string type.

    We do not specify the pattern syntax: libraries may choose to interpret it as the Python stdlib re module, regex package, ECMAscript syntax, etc., and we encourage them to clearly document their chosen interpretation. The meaning of the regex_flags argument is also implementation-defined.

    I think this is sufficiently-implementation-defined that we should just, well, make downstream implementations define their own Regex metadata with more precise semantics. Either that, or we explicitly separate PythonRegex(pattern: str|bytes, flags: int) from JSONschemaRegex(pattern: str) and make people choose one.

    opened by Zac-HD 14
  • Switch to hatchling & pip-tools

poetry was annoying me; also, its lock file was out of date.

    Changes here:

    • use hatchling for build
    • use pip-compile (from pip-tools) to lock linting and testing requirements
    • tweak pre-commit
    • use pre-commit for linting
    • check github ref before release
    • add python 3.11 classifier
    • remove CI caching - seems to reduce CI time from ~1m20 to <20seconds, also simplifies CI setup
    opened by samuelcolvin 8
  • Magic syntax for constraints: `X > 5` -> `Gt(5)`, etc.

    Closes #28; needs some more tests for validation errors and edge cases if we decide to go for it, and of course documentation.

    Importantly, adding most of the magic to our existing Interval and Len classes means that we can keep the API surface minimal, without exposing 'incomplete constraint' objects that don't really mean anything. Note that I've set this up to error out immediately if you try using the comparison magics on something which already represents a constraint. You can still do e.g. Interval(gt=3) < 5; this seems weird but OK to me in that it's very clear what it does and there's no duplication of bounds.

    opened by Zac-HD 6
  • ImportError: cannot import name 'Gt' from 'annotated_types'

    Hi,

    I got an ImportError when I was trying to import Gt:

    ImportError: cannot import name 'Gt' from 'annotated_types'
        (/root/venv/lib/python3.9/site-packages/annotated_types/__init__.py)
    

    I tried installing and using the package on my personal computer (Python 3.9.12) and an EC2 instance (Python 3.9.13). The same error showed. I am wondering how to solve the problem.

    Thanks!

    opened by ychen878 6
  • use type aliases for shorthand constraints

    https://github.com/annotated-types/annotated-types/pull/5#issuecomment-1118094109

    I think these names make more sense when they're wrapping the type, but I'm happy to keep the old ones

    opened by adriangb 6
  • Rethink `max_exclusive`, convert to `max_inclusive`?

    If we have MaxLen, and (in pydantic at least) that can be set via a max_length argument.

    I really think this should mean "maximum inclusive", not "maximum exclusive" as currently documented.

    This matches (IMHO) much better people's assumption about what MaxLen(5) or max_length=5 or Len(0, 5) means:

    "The airbnb allows maximum 5 guests", you would assume 5 guests were allowed, not just 4

    If for the sake of correctness, that involves either:

    • treating slices differently
    • or, removing the recommendation on allowing slices

    That's sad, but I think a price worth paying.

At the end of the day, max_length=5 meaning any length up to 4 won't fly in pydantic.

    opened by samuelcolvin 5
  • Operator based `BaseMetadata` creation

    Hi all, I really like the idea of BaseMetadata to formulate Annotated types.

    However, configuration using functions such as Gt(10) is not as easy as x > 10 to read. It would be nice to provide a non-constraint class All and support methods such as __gt__ like below.

    class All(BaseMetadata):
        def __gt__(self, other):
            return Gt(other)
    
    X = All()  # could be better to keep `All` private and provide `X` publically.
    Annotated[int, X > 5]  # equivalent to Annotated[int, Gt(5)]
    

    Similar logic can be applied to construct several objects.

    • Interval from 2 < X < 5
    • MultipleOf from X % 3 == 0
    • MaxLen from X.len() <= 10
    opened by hanjinliu 4
  • Chained constraints

    Thought from #28: we could also do something like Exponent = Annotated[int, Gt(0) & MultipleOf(2)].

    Implementing & would be easy: it just creates a GroupedMetadata.

    Other operators would be trickier: Annotated[str, Foo | Bar] should actually be expressed as Annotated[str, Bar] | Annotated[str, Foo], and I have no idea what XOR would be, I think we’d have to offload that to implementers. So I think this idea has short legs.

    opened by adriangb 2
  • Use `any()` instead of for loop & Invert `any/all` to simplify comparisons

    Using Python's any() and all() built-in functions is a more concise way of doing this than using a for loop.

    any() will return True when at least one of the elements evaluates to True, all() will return True only when all the elements evaluate to True.

    opened by yezz123 2
  • List of annotations vs. nested annotations

    Hello, I was brought here from https://github.com/samuelcolvin/pydantic/discussions/4110.

    Perhaps this feature is already available or on the drawing-board, but I just wanted to say that it would be very useful if the user could provide a list of type-annotations like this:

    age: Annotated[int, [Gt(18), Lt(35)]]
    

    instead of having to recursively nest the annotations like this (although this should also be possible in case that might be useful in some applications):

    age: Annotated[Annotated[int, Gt(18)], Lt(35)]
    

    Thanks!

    opened by Hvass-Labs 2
Releases (latest: v0.4.0)
  • v0.4.0(Oct 12, 2022)

    What's Changed

    • Switch to hatchling & pip-tools by @samuelcolvin in https://github.com/annotated-types/annotated-types/pull/22
    • convert Len to GroupedMetadata, add MinLen and MaxLen by @samuelcolvin in https://github.com/annotated-types/annotated-types/pull/21
    • switch from max_exclusive to max_length (inclusive) by @samuelcolvin in https://github.com/annotated-types/annotated-types/pull/24

    Full Changelog: https://github.com/annotated-types/annotated-types/compare/v0.3.1...v0.4.0

  • v0.3.1(Sep 25, 2022)

    What's Changed

    • Add BaseMetadata to __all__ by @samuelcolvin in https://github.com/annotated-types/annotated-types/pull/19

    Full Changelog: https://github.com/annotated-types/annotated-types/compare/v0.3.0...v0.3.1

  • v0.3.0(Sep 25, 2022)

    What's Changed

    • Remove regex from tests by @adriangb in https://github.com/annotated-types/annotated-types/pull/13
    • add GroupedMetadata as a base class for Interval by @adriangb in https://github.com/annotated-types/annotated-types/pull/12
    • Remove regex from tests (again) by @adriangb in https://github.com/annotated-types/annotated-types/pull/14
    • use init_subclass instead of ABC by @adriangb in https://github.com/annotated-types/annotated-types/pull/16
    • add docs for GroupedMetadata and BaseMetadata by @adriangb in https://github.com/annotated-types/annotated-types/pull/15

    Full Changelog: https://github.com/annotated-types/annotated-types/compare/v0.2.0...v0.3.0

  • v0.2.0(Jun 15, 2022)
