Python library for the analysis of dynamic measurements

Overview

the logo of PyDynamic



The goal of this library is to provide a starting point for users in metrology and related areas who deal with time-dependent, i.e. dynamic, measurements. The initial version of this software was developed as part of a joint research project of the national metrology institutes of Germany and the UK, i.e. the Physikalisch-Technische Bundesanstalt and the National Physical Laboratory.

Further development and explicit use of PyDynamic is part of the European research project EMPIR 17IND12 Met4FoF and the German research project FAMOUS.


Quickstart

To dive right into it, install PyDynamic and execute one of the examples:

(my_PyDynamic_venv) $ pip install PyDynamic
Collecting PyDynamic
[...]
Successfully installed PyDynamic-[...]
(my_PyDynamic_venv) $ python
Python 3.9.7 (default, Aug 31 2021, 13:28:12) 
[GCC 11.1.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from PyDynamic.examples.uncertainty_for_dft.deconv_DFT import DftDeconvolutionExample
>>> DftDeconvolutionExample()
Propagating uncertainty associated with measurement through DFT
Propagating uncertainty associated with calibration data to real and imag part
Propagating uncertainty through the inverse system
Propagating uncertainty through the low-pass filter
Propagating uncertainty associated with the estimate back to time domain

A couple of plots will open up so you can observe the results. For further information just read on and visit our tutorial section.

Features

PyDynamic offers propagation of uncertainties for

  • application of the discrete Fourier transform and its inverse
  • filtering with an FIR or IIR filter with uncertain coefficients
  • design of an FIR filter as the inverse of a frequency response with uncertain coefficients
  • design of an IIR filter as the inverse of a frequency response with uncertain coefficients
  • deconvolution in the frequency domain by division
  • multiplication in the frequency domain
  • transformation from amplitude and phase to a representation by real and imaginary parts
  • 1-dimensional interpolation

For the validation of the propagation of uncertainties, the Monte-Carlo method can be applied using a memory-efficient implementation of Monte-Carlo for digital filtering.
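
For a first impression of how these pieces fit together, the following minimal sketch propagates the uncertainty of a signal through an uncertain FIR filter and validates the result with the Monte-Carlo implementation. It mirrors the call pattern that also appears in the issue examples further below; treat the exact signatures as indicative rather than authoritative and consult the docs for the full API.

import numpy as np

from PyDynamic.uncertainty.propagate_MonteCarlo import MC
from PyDynamic.uncertainty.propagate_filter import FIRuncFilter

# uncertain test signal: a ramp with 10 % relative standard uncertainty
x = np.linspace(0, 1, 100)
ux = 0.1 * x

# short moving-average FIR filter, here with exactly known coefficients
theta = np.full(5, 0.2)
utheta = np.zeros((5, 5))

# GUM-based propagation through the FIR filter ...
y, uy = FIRuncFilter(x, ux, theta, utheta, kind="diag")

# ... validated against the memory-efficient Monte-Carlo implementation
y_mc, Uy_mc = MC(x, ux, theta, [1.0], utheta, runs=1000)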

Module diagram

The fundamental structure of PyDynamic is shown in the following figure.

PyDynamic module diagram

However, imports should generally be possible without explicitly naming all packages and modules in the path, so that for example the following import statements are all equivalent.

from PyDynamic.uncertainty.propagate_filter import FIRuncFilter
from PyDynamic.uncertainty import FIRuncFilter
from PyDynamic import FIRuncFilter

Documentation

The documentation for PyDynamic can be found on ReadTheDocs.

Installation

The installation of PyDynamic is as straightforward as the Python ecosystem suggests. Detailed instructions on the different options to install PyDynamic can be found in the installation section of the docs.

Contributing

Whenever you are involved with PyDynamic, please respect our Code of Conduct. If you want to contribute back to the project, after reading our Code of Conduct, take a look at our open developments in the project board, pull requests and search the issues. If you find something similar to your ideas or troubles, let us know by leaving a comment or remark. If you have something new to tell us, feel free to open a feature request or bug report in the issues. If you want to contribute code or improve our documentation, please check our contribution advice and tips.

If you have downloaded this software, we would be very thankful for letting us know. You may, for instance, drop an email to one of the authors (e.g. Sascha Eichstädt, Björn Ludwig or Maximilian Gruber).

Examples

We have collected extended material for an easier introduction to PyDynamic in the package examples. Detailed assistance on getting started can be found in the corresponding sections of the docs.

In various Jupyter Notebooks and scripts we demonstrate the use of the provided methods to aid your first steps with PyDynamic. New features are introduced with an example from the beginning where feasible. We are currently moving this supporting collection to an external repository on GitHub. It will be available at github.com/PTB-M4D/PyDynamic_tutorials in the near future.

Roadmap

  1. Implementation of robust measurement (sensor) models
  2. Extension to more complex noise and uncertainty models
  3. Introducing uncertainty propagation for Kalman filters

For a comprehensive overview of current development activities and upcoming tasks, take a look at the project board, issues and pull requests.

Citation

If you publish results obtained with the help of PyDynamic, please use the above linked Zenodo DOI for the code itself or cite

Sascha Eichstädt, Clemens Elster, Ian M. Smith, and Trevor J. Esward: Evaluation of dynamic measurement uncertainty – an open-source software package to bridge theory and practice. J. Sens. Sens. Syst., 6, 97-105, 2017, DOI: 10.5194/jsss-6-97-2017.

Acknowledgement

Part of this work is developed as part of the Joint Research Project 17IND12 Met4FoF of the European Metrology Programme for Innovation and Research (EMPIR).

This work was part of the Joint Support for Impact project 14SIP08 of the European Metrology Programme for Innovation and Research (EMPIR). The EMPIR is jointly funded by the EMPIR participating countries within EURAMET and the European Union.

Disclaimer

This software is developed at Physikalisch-Technische Bundesanstalt (PTB). The software is made available "as is" free of cost. PTB assumes no responsibility whatsoever for its use by other parties, and makes no guarantees, expressed or implied, about its quality, reliability, safety, suitability or any other characteristic. In no event will PTB be liable for any direct, indirect or consequential damage arising in connection with the use of this software.

License

PyDynamic is distributed under the LGPLv3 license except for the module impinvar.py in the package misc, which is distributed under the GPLv3 license.

Comments
  • Rewrite inverse methods of model estimation


    This will finally resolve #147.

    Merge and rewrite

    • [x] LSIIR, invLSIIR and invLSIIR_unc based on Monte Carlo,
    • [x] LSFIR, invLSFIR, invLSFIR_unc, invLSFIR_uncMC.
    • [x] After reviewing all this, we should merge #251 into this one here and then merge all that into prepare_major_release_2.0.0

    Check out the docs as well to be sure they look nice: https://pydynamic.readthedocs.io/en/fix-147_rewrite_inverse_methods_of_model_estimation

    opened by BjoernLudwigPTB 7
  • Rename deconvolution and identification to model_estimation


    To avoid ambiguous method naming we will combine all methods from identification and deconvolution in one new module model_estimation, including the fit_filter.py methods that carry the same names in both modules. That requires us to rename some methods out of deconvolution.fit_filter.py. Because of the resulting incompatibility with previous versions of PyDynamic we need to insert a deprecation warning into the next minor release and inform users about the upcoming change.

    major change 
    opened by BjoernLudwigPTB 7
  • Presumably shifted uncertainty result for FIRuncFilter for non-stationary uncertainty


    In cases of non-stationary uncertainty (kind="diag"), the output of FIRuncFilter returns an uncertainty result that seems to be shifted.

    Looking into the code, this is very likely caused by the equations to calculate UncCov: L.182 + L.186 + L.190 + L.192. Here, theta and Ulow are multiplied. However, for values in theta, theta[i+1] refers to an earlier point in time than theta[i], while in Ulow it is the other way around. Therefore, during the multiplication, values are combined that do not correspond in their time order. To fix this, theta probably needs to be reversed to achieve the correct (time-ascending) order.
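
    The following toy snippet (not PyDynamic code, just a hypothetical illustration) shows the suspected misalignment and why reversing theta restores the pairing of coefficients with the matching points in time:

    import numpy as np

    # theta[0] weights the most recent sample, theta[1] the one before, ...
    theta = np.array([0.5, 0.3, 0.2])
    # ... while the window of past variances Ulow is stored oldest-to-newest
    Ulow = np.array([0.01, 0.04, 0.09])

    misaligned = theta * Ulow      # combines values from different time instants
    aligned = theta[::-1] * Ulow   # both factors now in time-ascending order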

    It seems like this misbehaviour is not covered by tests or examples so far. Therefore a suitable test or example should be introduced (maybe against some Monte Carlo method). A comparison with the upcoming IIRuncFilter could also be an option.

    Environment

    • OS: Windows 10
    • PyDynamic Version 1.6.0 (direct from master)

    MWE showing comparison with MC-result

    import matplotlib.pyplot as plt
    import numpy as np
    
    from PyDynamic.uncertainty.propagate_MonteCarlo import MC
    from PyDynamic.uncertainty.propagate_filter import FIRuncFilter
    
    n_signal = 100
    n_filter = 50
    
    x = np.append(np.arange(n_signal//2), np.zeros(n_signal//2))
    ux = 0.1 * np.abs(x)
    
    theta = np.array([0.5,0.5] + [0] * (n_filter - 2))
    utheta = np.zeros((n_filter, n_filter))
    
    
    # run filter
    y, uy = FIRuncFilter(
        x, ux, theta, utheta, kind="diag"
    )  # apply uncertain FIR filter (GUM formula)
    yMC, uyMC = MC(
        x, ux, theta, [1.0], utheta, runs=1000
    )  # apply uncertain FIR filter (Monte Carlo)
    
    
    # compare FIR and MC results
    fig, ax = plt.subplots(nrows=2)
    
    ax[0].plot(x, label="x")
    ax[0].plot(theta, label="theta")
    ax[0].plot(y, label="y")
    ax[0].plot(yMC, label="y (MC)")
    ax[0].legend()
    
    ax[1].plot(ux, label="ux")
    ax[1].plot(np.diag(utheta), label="utheta")
    ax[1].plot(uy, label="uy")
    ax[1].plot(np.sqrt(np.diag(uyMC)), label="uy (MC)")
    ax[1].legend()
    
    plt.show()
    

    Output of MWE (attached plots): current situation: issue_on; expected result: issue_off.

    Note: The effect becomes especially obvious for long filter lengths and if the signal is padded with many zeros.

    bug 
    opened by mgrub 6
  • Colored noise in FIRuncFilter and provide utility functions


    Requires a detailed review of the newly implemented methods; a sketch of the kind of normalization check intended is given after the list below.

    • check normalization of created ideal autocorrelation
    • check normalization of generated colored noise from white noise
    • check execution of propagate-filter (compare sigma as float or np.ndarray)
    • review tests
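
    A rough sketch of such a normalization check, assuming the colouring is done by convolving white noise with a kernel normalized to unit energy (generic NumPy/SciPy code, not the implementation under review):

    import numpy as np
    from scipy.signal import lfilter

    rng = np.random.default_rng(42)
    sigma = 0.3
    white = sigma * rng.standard_normal(100_000)

    # colouring kernel, scaled so that sum(kernel**2) == 1
    kernel = np.exp(-np.arange(20) / 5.0)
    kernel /= np.sqrt(np.sum(kernel ** 2))
    colored = lfilter(kernel, [1.0], white)

    # with that normalization the variance (lag-0 autocorrelation) stays ~ sigma**2
    print(np.var(colored), sigma ** 2)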
    opened by mgrub 5
  • Revise FIRuncFilter full covariance


    This pull request addresses issue #175 in some way.

    By introducing a new function _fir_filter we can propagate full covariance information into the output of an FIR filter. Based on this function a wrapper FIRuncFilter_2 is introduced, mimicking the behaviour of the existing FIRuncFilter (and thus preparing a potential replacement later on).
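
    For orientation, a standalone sketch of the underlying idea (zero initial conditions, coefficient uncertainty ignored; this is not the actual _fir_filter code): the FIR filter is a linear map y = B @ x, so a full input covariance propagates as Uy = B @ Ux @ B.T.

    import numpy as np
    from scipy.linalg import toeplitz

    def fir_full_covariance(x, Ux, theta):
        """Propagate a full covariance Ux through the FIR filter theta."""
        n = len(x)                     # assumes len(theta) <= len(x)
        first_col = np.zeros(n)
        first_col[: len(theta)] = theta
        B = toeplitz(first_col, np.zeros(n))  # lower-triangular convolution matrix
        return B @ x, B @ Ux @ B.T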

    Benefits of the new function(s):

    • input can now have full covariance information as well
    • return full covariance of output
    • reduced complexity of the code
    • no more Python for-loops (everything is done using 2D convolution)
    • reduced computations if Utheta is None or Ux is None (this was already done for FIRuncFilter, but in a much more complex way)
    • control over initial conditions of the FIR filter (at least limited)

    A visual comparison with the Monte Carlo covariance result shows good agreement (see the attached plot comparison_fir_mc).

    TODO: A quick runtime comparison of both methods should be done to evaluate potential performance gains/losses.

    opened by mgrub 4
  • Fix frequency argument in sine and multi_sine


    I just worked with the sine and multi_sine methods and the freq argument gets used in different ways than I would expect:

    In misc.testsignals.sine, line 185 should be replaced by x = amp * np.sin(2 * np.pi * freq * time) (all multiplication). Alternatively, the freq parameter could be renamed to period.

    In misc.testsignals.multi_sine, line 213 should be replaced by x += amp * np.sin(2 * np.pi * freq * time) (scale sine by 2*pi). Alternatively, the freq parameter could be renamed to angular_frequency.

    To have matching signatures for both sine and multi_sine, I suggest keeping the freq argument but adjusting its use in the code as proposed.
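
    In other words, the intended convention would be something like this (hypothetical stand-in, not the current misc.testsignals code):

    import numpy as np

    def sine(time, amp=1.0, freq=1.0):
        # freq in Hz; the angular frequency 2*pi*freq is formed inside
        return amp * np.sin(2 * np.pi * freq * time)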

    Some references for the suggested changes can be found here: https://de.wikipedia.org/wiki/Sinuston https://en.wikipedia.org/wiki/Angular_frequency

    bug major change 
    opened by mgrub 4
  • Feature request: Implementation of interpolation methods for non-equidistant samples


    Some digital sensors generate their own sample clock. This clock fluctuates considerably, which leads to non-equidistant sample spacing. FFT, DFT and wavelet transformations need equidistant samples, so it would be nice if PyDynamic could implement interpolation methods with error propagation.
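
    A minimal sketch of what such a method could propagate for linear interpolation, assuming uncorrelated uncertainties of the neighbouring samples (generic NumPy code, not a proposed PyDynamic API):

    import numpy as np

    def interp_linear_unc(t_new, t, y, uy):
        """Linearly interpolate y(t) and propagate its standard uncertainties."""
        i = np.clip(np.searchsorted(t, t_new) - 1, 0, len(t) - 2)
        w = (t_new - t[i]) / (t[i + 1] - t[i])
        y_new = (1 - w) * y[i] + w * y[i + 1]
        uy_new = np.sqrt((1 - w) ** 2 * uy[i] ** 2 + w ** 2 * uy[i + 1] ** 2)
        return y_new, uy_new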

    Best regards, Benedikt

    enhancement 
    opened by BeneSeePTB 4
  • Increase threshold for test_ARMA


    Apparently one of the tolerances in our test suite for checking ARMA() is too strict. See some of the most recent failed instances:

    • https://app.circleci.com/pipelines/github/PTB-M4D/PyDynamic/2711/workflows/666c9563-ff15-4b60-892a-9d4380271275/jobs/14973/tests#failed-test-0
    • https://app.circleci.com/pipelines/github/PTB-M4D/PyDynamic/2711/workflows/666c9563-ff15-4b60-892a-9d4380271275/jobs/14975/tests#failed-test-0
    • https://app.circleci.com/pipelines/github/PTB-M4D/PyDynamic/2709/workflows/c4768521-6f49-4295-addc-1ec2fd4162b1/jobs/14961/tests#failed-test-0
    • https://app.circleci.com/pipelines/github/PTB-M4D/PyDynamic/2705/workflows/9c5f9554-7840-449b-8ab1-85d7f2570dde/jobs/14938/tests#failed-test-0

    It looks like we should increase it to a value of atol = 0.22. The line above that assertion fails less often, but the following failed instance suggests raising the tolerance for that as well:

    • https://app.circleci.com/pipelines/github/PTB-M4D/PyDynamic/2704/workflows/164c507d-354e-441d-a5ea-d940d5e1d907/jobs/14929/tests#failed-test-0

    Seems as if rtol = 0.22 might be a good choice as well.
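
    The change would amount to something along these lines in the affected assertions (hypothetical excerpt with placeholder values):

    import numpy as np

    actual = np.array([0.10, -0.05, 0.31])     # placeholder for the ARMA() result
    reference = np.array([0.12, -0.20, 0.25])  # placeholder for the expected values

    # the proposed, looser tolerances
    np.testing.assert_allclose(actual, reference, atol=0.22, rtol=0.22)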

    opened by BjoernLudwigPTB 3
  • convolve_unc allow 1D input arrays for uncertainty


    For convenience, this implements the use of 1D-arrays for U1 and U2 in PyDynamic.uncertainty.convolve_unc to specify the standard uncertainties instead of full covariance matrices.

    Specification of uncertainties using full covariance matrices remains possible and the output is still returned as full covariance matrix. Hence, no breaking change is introduced.
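
    For context, the 1D form is simply shorthand for a purely diagonal covariance matrix (generic illustration):

    import numpy as np

    u1 = np.array([0.1, 0.2, 0.3])   # standard uncertainties
    U1 = np.diag(u1 ** 2)            # the equivalent full (diagonal) covariance matrix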

    This PR makes the required changes to:

    • relevant test strategy
    • docstring
    • actual code change
    enhancement feature request 
    opened by mgrub 3
  • __version__ missing?


    I just updated to version v1.10.0. When I tried to check and verify that I have the new version (I also checked with the same command that the previously installed version was v1.9.0), I got an error message:

    import PyDynamic
    PyDynamic.__version__

    AttributeError                     Traceback (most recent call last)
    in
          1 import PyDynamic
    ----> 2 PyDynamic.__version__

    AttributeError: module 'PyDynamic' has no attribute '__version__'

    Other attributes like "__doc__" or "__package__" are available.

    cosmetics 
    opened by Ma-Weber 3
  • Prepare major release 2.0.0


    Here we collect all preparations for the next major release with all those BREAKING CHANGES. After merging this we should have:

    • [x] resolved #166
    • [x] resolved #163
    • [x] resolved #154
    • [x] resolved #147 (which is handled by #189)
    • [x] resolved #135 (which was handled by #188)
    • [x] resolved #134 (which was handled by #180)
    • [x] resolved #133 (which was handled by #158)
    • [x] resolved #109 (which was handled by #143)
    • [x] resolved #86 (which was handled by #142)
    • [x] resolved #66 (which was handled by #187)
    • [x] resolved #49
    • [x] resolved #31 (which was handled by #181).
    opened by BjoernLudwigPTB 3
  • Wrong calculation of covariance matrix in `UMC_generic`


    Describe the bug: UMC_generic has an error in the calculation of the covariance matrix after the first block. The covariance therefore shows a major mismatch with a basic MC implementation if only one (or a few) blocks are evaluated. If the number of blocks is large, this error wears off due to averaging and inherent random fluctuations.

    To Reproduce

    from PyDynamic.uncertainty.propagate_MonteCarlo import UMC_generic
    import numpy as np
    import matplotlib.pyplot as plt
    
    # init
    x = np.linspace(0,5,num=11)
    ux = np.full_like(x, 0.5)
    n_runs = 2000
    
    draw_samples = lambda size: np.random.normal(x, ux, size=(size,x.size))
    evaluate = lambda X: X + 1
    
    # PyDynamic
    y, Uy, _, output_shape = UMC_generic(draw_samples, evaluate=evaluate, runs=n_runs, blocksize=2000, n_cpu=1) 
    
    # naive Monte Carlo
    results = []
    for sample in draw_samples(n_runs):
        result = evaluate(sample)
        results.append(result)
    y_mc = np.mean(results, axis=0)
    Uy_mc = np.cov(results, rowvar=False)
    
    # comparison
    fig, ax = plt.subplots(1, 3)
    ax[0].plot(y - y_mc)
    ax[1].plot(np.diag(Uy) - np.diag(Uy_mc))
    ax[2].imshow(Uy - Uy_mc)
    plt.show()
    
    # comparison
    print("median y - y_mc   (ideal: 0.0): ", np.median(y - y_mc))
    print("median Uy - Uy_mc (ideal: 0.0): ", np.median(Uy - Uy_mc))
    print("median diag(Uy) / np.diag(Uy_mc) (ideal: 1.0): ", np.median(np.diag(Uy) / np.diag(Uy_mc)))
    

    Example output:

    median y - y_mc   (ideal: 0.0):  -0.014300805445340181
    median Uy - Uy_mc (ideal: 0.0):  0.8981734069108699
    median diag(Uy) / np.diag(Uy_mc) (ideal: 1.0):  1987.8304933280958
    

    As can be seen, if only one block (of 2000 samples) is executed, the elements of the main diagonal are off by roughly a factor of 2000 compared to the basic MC implementation.

    Potential Solution: Here, the code should normalize by the number of samples in the current block.
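
    A hypothetical sketch (not UMC_generic itself) of how per-block results can be pooled into a running mean and covariance with the normalization applied per block:

    import numpy as np

    def pool_block(mean, scatter, n_seen, block):
        """Fold a block of Monte Carlo samples into a running mean and scatter."""
        n_block = block.shape[0]
        n_total = n_seen + n_block
        block_mean = block.mean(axis=0)
        delta = block_mean - mean
        centered = block - block_mean
        scatter = (
            scatter
            + centered.T @ centered
            + np.outer(delta, delta) * n_seen * n_block / n_total
        )
        mean = mean + delta * n_block / n_total
        # the covariance is the pooled scatter divided by the total number of samples
        cov = scatter / n_total
        return mean, scatter, cov, n_total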

    Environment (please complete the following information):

    • OS: Windows 10
    • PyDynamic Version 2.3.0
    bug 
    opened by mgrub 0
  • Add shapes to propagate_filter.py


    At the moment we provide no shapes in the docstrings of this module. This is inconsistent with how we provide documentation for modules like interpolate.
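
    Something along these lines (hypothetical excerpt; the parameter names are only indicative of the propagate_filter signatures):

    def fir_unc_filter_sketch(y, sigma_noise, theta, Utheta=None):
        """Docstring sketch illustrating the intended shape annotations.

        Parameters
        ----------
        y : np.ndarray of shape (N,)
            filter input signal
        sigma_noise : float or np.ndarray of shape (N,)
            standard deviation(s) of the noise associated with y
        theta : np.ndarray of shape (M,)
            FIR filter coefficients
        Utheta : np.ndarray of shape (M, M), optional
            covariance matrix associated with theta

        Returns
        -------
        x : np.ndarray of shape (N,)
            FIR filter output
        ux : np.ndarray of shape (N,)
            standard uncertainties associated with x
        """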

    documentation 
    opened by BjoernLudwigPTB 0
  • Reasonably treat non-equidistant signal's sampling interval length and frequency


    Is your feature request related to a problem? Please describe. At the moment the signal class computes the sampling interval length, if it is not provided by the user, as kind of a weighted sum of the different occurring interval lengths in the time vector: the stored interval length is taken as the arithmetic mean of all unique interval lengths in the time vector. The sampling frequency, if not provided, is then taken as the reciprocal of that. This can result in somewhat inconsistent data in an instance's attributes, e.g. in case there is one single interval that is much larger than all other, possibly very short, intervals. If you then only look at the frequency and the values, you might draw wrong conclusions. We do not clearly see, though, what the use cases of the two attributes are.

    Describe the solution you'd like: We wish for a well-founded rationale behind the way of computing the interval length and sampling frequency, which is then well documented in the corresponding part of the docs.

    Describe alternatives you've considered: One way would be to find the most reasonable mean (unweighted arithmetic, geometric, harmonic, median) of the interval lengths and compute that for the provided time vector. One could as well compute the frequency and interval length only when needed (when is that?) and maybe even interpolate the values at the resulting time instances in these cases.
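
    As a small illustration of how much the candidate definitions can differ for a time vector with one large gap (generic NumPy code):

    import numpy as np

    time = np.array([0.0, 0.1, 0.2, 0.3, 1.3])  # one interval much larger than the rest
    intervals = np.diff(time)

    arithmetic = intervals.mean()                        # 0.325, dominated by the gap
    median = np.median(intervals)                        # 0.1
    harmonic = len(intervals) / np.sum(1.0 / intervals)  # ~0.129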

    Additional context: We are grateful for any input on this from expert users, simply as a comment in this issue to start with.

    feature request 
    opened by BjoernLudwigPTB 0
  • Include the documentation's FIR example additional content into notebook and delete rst-file


    opened by BjoernLudwigPTB 0
  • Consider unifying the treatment of complex vectors by either working with real and imaginary parts or with complex values at least internally


    For instance, in the module model_estimation.fit_filter we are converting back and forth between those two formats and could probably improve performance by sticking to one form. This requires thorough checks of the formulas for the arithmetic operations though. Other modules are probably affected as well, e.g. the GUM2DFT-related code.
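
    For reference, the two representations in question and the round trip between them (generic illustration):

    import numpy as np

    z = np.array([1.0 + 2.0j, 3.0 - 1.0j])    # complex-valued representation
    ri = np.r_[z.real, z.imag]                # stacked real/imaginary representation
    z_back = ri[: z.size] + 1j * ri[z.size:]  # ... and back to complex values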

    opened by BjoernLudwigPTB 0