a generic C++ library for image analysis

Overview

VIGRA Computer Vision Library

            Copyright 1998-2013 by Ullrich Koethe


This file is part of the VIGRA computer vision library.
You may use, modify, and distribute this software according
to the terms stated in the LICENSE.txt file included in
the VIGRA distribution.

The VIGRA Website is
    http://ukoethe.github.io/vigra/
Please direct questions, bug reports, and contributions to
    [email protected]    or
    [email protected]


THIS SOFTWARE IS PROVIDED AS IS AND WITHOUT ANY EXPRESS OR
IMPLIED WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.

Installation

Installation instructions can be found in the file

$VIGRA_PATH/doc/vigra/Installation.html

If the documentation has not yet been generated (e.g. when you build from a development snapshot), you can find these instructions in

$VIGRA_PATH/docsrc/installation.dxx

or online at http://ukoethe.github.io/vigra/doc-release/vigra/Installation.html

Documentation

If you downloaded an official release, the documentation can be found in $VIGRA_PATH/doc/vigra/, the start file is $VIGRA_PATH/doc/vigra/index.html or online at http://ukoethe.github.io/vigra/#documentation.

When you use the development version from GitHub, you can generate the documentation by running make doc.

Download

VIGRA can be downloaded at http://ukoethe.github.io/vigra/#download. The official development repository is at https://github.com/ukoethe/vigra

What is VIGRA

VIGRA is a computer vision library that puts its main emphasis on flexible algorithms, because algorithms represent the principal know-how of this field. The library was consequently built using generic programming as introduced by Stepanov and Musser and exemplified in the C++ Standard Template Library. By writing a few adapters (image iterators and accessors), you can use VIGRA's algorithms on top of your data structures, within your environment. Alternatively, you can also use the data structures provided within VIGRA, which can be easily adapted to a wide range of applications. VIGRA's flexibility comes almost for free: since the design uses compile-time polymorphism (templates), the performance of the compiled program approaches that of a traditional, hand-tuned, inflexible solution.
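
For a quick impression of the library in use, here is a minimal sketch via the Python bindings (vigranumpy); the file name example.png is hypothetical and the functions are called with their default options:

    import vigra

    # Read an image, smooth it with a Gaussian filter, and write the result.
    image = vigra.impex.readImage("example.png")
    smoothed = vigra.filters.gaussianSmoothing(image, 2.0)   # sigma = 2.0
    vigra.impex.writeImage(smoothed, "example_smoothed.png")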

Comments
  • labelVolume segfault with gcc-4.8.1

    The labelVolume segmentation fault and the generally strange behaviour of the labelVolume* functions are due to over-optimization by gcc-4.8.1 with -O3.

    Solution: set the optimization flag to -O2.


    With the recent libboost, I'm having segmentation faults in labelVolumeWithBackground() when I provide certain input volumes with a specific combination of axistags and ordering. Strangely, labelVolume() is not affected.

    #!/usr/bin/python2
    
    import vigra
    import numpy as np
    
    print("Testing for segfaults")
    
    X = vigra.VigraArray(np.zeros((3,20,50)), axistags=vigra.defaultAxistags('zyx'))
    labels = vigra.analysis.labelVolumeWithBackground(X)
    print("np.zeros worked")
    
    X = vigra.VigraArray(np.ones((3,20,50), dtype=np.uint8), axistags=vigra.defaultAxistags('xyz'))
    labels = vigra.analysis.labelVolumeWithBackground(X)
    print("XYZ (order=C) worked.")
    
    X = vigra.VigraArray(np.ones((3,20,50), dtype=np.uint8), order='F', axistags=vigra.defaultAxistags('zyx'))
    labels = vigra.analysis.labelVolumeWithBackground(X)
    print("ZYX (order=F) worked.") 
    
    X = vigra.VigraArray(np.ones((3,20,50), dtype=np.uint8), order='C', axistags=vigra.defaultAxistags('zyx'))
    labels = vigra.analysis.labelVolume(X)
    print("ZYX (order=C, labelVolume() )worked.") 
    
    
    
    # this example fails with segmentation fault 
    # Linux burger-Desktop 3.9.9-1-ARCH #1 SMP PREEMPT Wed Jul 3 22:45:16 CEST 2013 x86_64 GNU/Linux
    # LIBS:
    #       python2 2.7.5-1
    #       boost-libs 1.54.0-2
    
    X = vigra.VigraArray(np.ones((3,20,50), dtype=np.uint8), order='C', axistags=vigra.defaultAxistags('zyx'))
    labels = vigra.analysis.labelVolumeWithBackground(X)
    print("You fixed it.") 
    
    Output:
    
    Testing for segfaults
    np.zeros worked
    XYZ (order=C) worked.
    ZYX (order=F) worked.
    ZYX (order=C, labelVolume() )worked.
    Segmentation fault (core dumped)
    
    Backtrace:
    
    #0  0x00007fffef7e5115 in unsigned int vigra::labelVolumeWithBackground<vigra::StridedMultiIterator<3u, float, float const&, float const*>, vigra::StandardConstValueAccessor<float>, vigra::TinyVector<long, 3>, vigra::StridedMultiIterator<3u, unsigned int, unsigned int&, unsigned int*>, vigra::StandardValueAccessor<unsigned int>, vigra::Neighborhood3DSix::NeighborCode3D, float, std::equal_to<float> >(vigra::StridedMultiIterator<3u, float, float const&, float const*>, vigra::TinyVector<long, 3>, vigra::StandardConstValueAccessor<float>, vigra::StridedMultiIterator<3u, unsigned int, unsigned int&, unsigned int*>, vigra::StandardValueAccessor<unsigned int>, vigra::Neighborhood3DSix::NeighborCode3D, float, std::equal_to<float>) () from /usr/lib/python2.7/site-packages/vigra/analysis.so
    #1  0x00007fffef7efee5 in vigra::NumpyAnyArray vigra::pythonLabelVolumeWithBackground<float>(vigra::NumpyArray<3u, vigra::Singleband<float>, vigra::StridedArrayTag>, int, float, vigra::NumpyArray<3u, vigra::Singleband<unsigned int>, vigra::StridedArrayTag>) () from /usr/lib/python2.7/site-packages/vigra/analysis.so
    #2  0x00007fffef851c40 in boost::python::detail::caller_arity<4u>::impl<vigra::NumpyAnyArray (*)(vigra::NumpyArray<3u, vigra::Singleband<float>, vigra::StridedArrayTag>, int, float, vigra::NumpyArray<3u, vigra::Singleband<unsigned int>, vigra::StridedArrayTag>), boost::python::default_call_policies, boost::mpl::vector5<vigra::NumpyAnyArray, vigra::NumpyArray<3u, vigra::Singleband<float>, vigra::StridedArrayTag>, int, float, vigra::NumpyArray<3u, vigra::Singleband<unsigned int>, vigra::StridedArrayTag> > >::operator()(_object*, _object*) () from /usr/lib/python2.7/site-packages/vigra/analysis.so
    #3  0x00007ffff656444a in boost::python::objects::function::call(_object*, _object*) const () from /usr/lib/libboost_python.so.1.54.0
    #4  0x00007ffff65647b8 in ?? () from /usr/lib/libboost_python.so.1.54.0
    #5  0x00007ffff656e4e3 in boost::python::handle_exception_impl(boost::function0<void>) () from /usr/lib/libboost_python.so.1.54.0
    #6  0x00007ffff6562f73 in ?? () from /usr/lib/libboost_python.so.1.54.0
    #7  0x00007ffff7a63c13 in PyObject_Call () from /usr/lib/libpython2.7.so.1.0
    #8  0x00007ffff7af39e1 in PyEval_EvalFrameEx () from /usr/lib/libpython2.7.so.1.0
    #9  0x00007ffff7af8290 in PyEval_EvalCodeEx () from /usr/lib/libpython2.7.so.1.0
    #10 0x00007ffff7af8392 in PyEval_EvalCode () from /usr/lib/libpython2.7.so.1.0
    #11 0x00007ffff7b1108f in run_mod () from /usr/lib/libpython2.7.so.1.0
    #12 0x00007ffff7b121ae in PyRun_FileExFlags () from /usr/lib/libpython2.7.so.1.0
    #13 0x00007ffff7b13319 in PyRun_SimpleFileExFlags () from /usr/lib/libpython2.7.so.1.0
    #14 0x00007ffff7b23c1f in Py_Main () from /usr/lib/libpython2.7.so.1.0
    #15 0x00007ffff7472a15 in __libc_start_main () from /usr/lib/libc.so.6
    #16 0x0000000000400741 in _start ()
    
    opened by burgerdev 24
  • python: Added applyMapping() function

    Here's the docstring:

    applyMapping( (object)labels, (dict)mapping [, (bool)allow_incomplete_mapping=False [, (object)out=None]]) -> object :
        Map all values in `labels` to new values using the given mapping (a dict).
        Useful for maps with large values, for which a numpy index array would need too much RAM.
        To relabel in-place, set `out=labels`.
    
        Parameters
        ----------
        labels: ndarray
        mapping: dict of ``{old_label : new_label}``
        allow_incomplete_mapping: If True, then any voxel values in the original data that are missing
                                  from the mapping dict will be copied (and casted) into the output.
                                  Otherwise, an ``IndexError`` will be raised if the map is incomplete
                                  for the input data.
        out: ndarray to hold the data. If None, it will be allocated for you.
             The dtype of ``out`` is allowed to be smaller (or bigger) than the dtype of ``labels``.
    
        Note: As with other vigra functions, you should provide accurate axistags for optimal performance.
    

    Edit: This PR uses a C++11 lambda. If that's not acceptable (until vigra 1.11), let me know.
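
    For illustration, a minimal usage sketch based on the docstring above, assuming the function is exposed as vigra.analysis.applyMapping (the label values and the mapping are made up):

    import numpy as np
    import vigra

    labels = np.array([[10, 10, 20],
                       [20, 30, 30]], dtype=np.uint32)

    # Map old label values to new ones without building a huge index array.
    mapping = {10: 1, 20: 2, 30: 3}
    relabeled = vigra.analysis.applyMapping(labels, mapping)

    # Relabel in-place by passing out=labels, as described in the docstring.
    vigra.analysis.applyMapping(labels, mapping, out=labels)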

    opened by stuarteberg 20
  • some vigranumpy test segfaults on Mac

    I am getting segfaults from a fresh vigranumpy install using OS X Lion and MacPorts’ python 2.7.

    make check gave me:

    executing test file /Users/hans/uni/KOGS/vigra/build/vigranumpy/test/test1.pyc
    ./run_vigranumpytest.sh: line 1: 28131 Segmentation fault: 11  /opt/local/bin/python -c "import nose; nose.main()" .
    make[3]: *** [vigranumpy/test/vigranumpytest.so] Error 1
    

    Interestingly, make -k reveals that all other tests pass successfully!

    opened by hmeine 16
  • Enhancements to unique() and relabelConsecutive()

    Here are two tiny changes to make it easier for users to replace the horribly slow numpy.unique() with vigra.analysis.unique():

    • Support int64. (This is more commonly needed than I originally thought.)
    • Sort the output by default, like numpy.unique() does.
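
    For illustration, a small sketch of swapping numpy.unique() for vigra.analysis.unique(); the sorted-by-default behaviour is assumed from the description above, and the label values are made up:

    import numpy as np
    import vigra

    labels = np.array([5, 3, 5, 7, 3, 3], dtype=np.uint32)

    u_np = np.unique(labels)                 # sorted unique values
    u_vigra = vigra.analysis.unique(labels)  # should match, but is much faster on large label arrays

    assert (u_np == u_vigra).all()
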
    opened by stuarteberg 15
  • permutationToNormalOrder() fails

    For the latest build of vigra master on windows 7, Visual Studio 2008 64bit, the check_python test fails with:

    11>executing test file c:\vigra\build\vigranumpy\test\test_impex.pyc
    11>E
    11>======================================================================
    11>ERROR: Failure: RuntimeError (exceptions.ValueError: permutationToNormalOrder() did not return a sequence of int.)
    11>----------------------------------------------------------------------
    11>Traceback (most recent call last):
    11>  File "C:\Python26\lib\site-packages\nose-1.0.0-py2.6.egg\nose\loader.py", line 390, in loadTestsFromName
    11>    addr.filename, addr.module)
    11>  File "C:\Python26\lib\site-packages\nose-1.0.0-py2.6.egg\nose\importer.py", line 39, in importFromPath
    11>    return self.importFromDir(dir_path, fqname)
    11>  File "C:\Python26\lib\site-packages\nose-1.0.0-py2.6.egg\nose\importer.py", line 86, in importFromDir
    11>    mod = load_module(part_fqname, fh, filename, desc)
    11>  File "c:\vigra\build\vigranumpy\test\test1.py", line 47, in <module>
    11>    img_rgb_f = at.RGBImage(np.random.rand(100,200,3)*255,dtype=np.float32)
    11>  File "C:\vigra\build\vigranumpy\vigra\arraytypes.py", line 1568, in RGBImage
    11>    res = VigraArray(obj, dtype, None, init, value, axistags)
    11>  File "C:\vigra\build\vigranumpy\vigra\arraytypes.py", line 398, in __new__
    11>    res = _constructArrayFromAxistags(cls, obj.shape, dtype, axistags, init)
    11>RuntimeError: exceptions.ValueError: permutationToNormalOrder() did not return a sequence of int.

    bug showstopper 
    opened by akreshuk 15
  • Travis CI Python 3.x fixes

    This appears to do the trick. Turns out this was never picking up the right Python library in the first place.

    However, I have found that running the following in Python points me to the correct directory for the Python library.

    from distutils.sysconfig import get_config_var
    get_config_var("LIBDIR")
    

    This doesn't tell me the library itself, but it does tell me where to look. At that point, we can use the existing search method to find the library.
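
    As a rough sketch (this is not the CMake logic from this PR, and LDLIBRARY is an assumption that only holds for typical Unix builds of CPython), the library file itself can often be pieced together from two config values:

    import os
    from distutils.sysconfig import get_config_var

    libdir = get_config_var("LIBDIR")        # directory to search, e.g. /usr/lib
    ldlibrary = get_config_var("LDLIBRARY")  # library file name, e.g. libpython2.7.so
    if libdir and ldlibrary:
        print(os.path.join(libdir, ldlibrary))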

    Setting the correct Boost.Python library was simply a matter of searching for python-pyXY instead of python, where X and Y are Python's major and minor versions.

    After some more experimentation, it became clear that we didn't have Boost.Python support for 3.5 or 3.4 on Travis CI. Instead, as we have Boost 1.46.1, we only have support for Python 3.2.

    Switching to Python 3.2 made a small issue with unicode support apparent. This required the addition of a small bug fix.

    opened by jakirkham 12
  • Porting to Python 3

    Fortunately, Guido punted this necessary move from 2015 to 2020. ( https://hg.python.org/peps/rev/76d43e52d978 ) However, at some point, this move will need to occur. If this is very complex, it may be worth trying to understand what makes it complex and what it will take to remedy it. Already some packages don't support Python 2.x.

    opened by jakirkham 12
  • Tests don't succeed with VC14

    We have been trying prereleases of VC14, and there were always problems with vigra's tests, but I hoped that they would vanish as the compiler became less buggy. Now the final version has been released, and the errors with vigra are nearly the only ones that persist. :-( So far, I am not sure whether it's a plain bug in VC14, or whether this new compiler just reveals bugs that went unnoticed for a long time, although I would suspect the former to be more likely.

    The following tests fail with a corrupted heap:

    • classifier_speed_comparison
    • test_simpleanalysis
    • test_convolution
    • test_classifier
    • test_slic2d

    I had a closer look at classifier_speed_comparison:

    • The problem occurs in the old, deprecated implementation, so it can be reproduced much faster by commenting out the use of the new implementation.
    • The crash happens in ArrayVector's destructor. (However, the heap corruption could be caused by something else.)
    opened by hmeine 12
  • Linking error with vigranumpy on Windows

    Running into some sort of linking error on Windows. Looks like it is related to vigranumpy and the HDF5 interface. It seems to be finding all the HDF5 libraries fine. Not entirely sure what is causing the issue.

    opened by jakirkham 11
  • Python bindings and improved prediction for Random Forest 3.

    This PR includes:

    • Fixing some issues that prevented efficient parallel prediction of RF3.
    • Python bindings for RF3
    • Allowing RF3 to be instantiated with any array type that supports the vigra::MultiArray API.

    For benchmarking of RF3, RF2 and sklearn RF see: https://github.com/constantinpape/rf_benchmarks

    opened by constantinpape 11
  • vigra master does not build on Mac OS X Mountain Lion

    VIGRA master does not currently compile on Mac OS X Mountain Lion 10.8 with the gcc provided by Xcode 4 (Apple-modified version 4.2).

    The error message ends with:

    /Users/lfiaschi/phd/workspace/vigra-github/vigranumpy/src/core/accumulator.cxx:130: instantiated from here
    /Users/lfiaschi/phd/workspace/vigra-github/include/vigra/accumulator.hxx:2873: error: no matching function for call to ‘get(const vigra::CoupledHandle<vigra::Multiband<float>, vigra::CoupledHandle<vigra::TinyVector<long int, 3>, void> >&)’
    make[2]: *** [vigranumpy/src/core/CMakeFiles/vigranumpy_analysis.dir/accumulator.cxx.o] Error 1
    make[1]: *** [vigranumpy/src/core/CMakeFiles/vigranumpy_analysis.dir/all] Error 2
    make: *** [all] Error 2

    bug 
    opened by lfiaschi 11
  • Build error with numpy >=1.22 [IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices]

    Hello,

    Upgrading numpy in Debian from 1.21.5 to 1.23.5-2 triggered a build error: https://bugs.debian.org/1026487

    executing test file /dev/shm/VIGRA/libvigraimpex-1.11.1/obj.x86_64-linux-gnu/vigranumpy/test/test_arraytypes.py
    .EEEEEEEEEEEEEEE.....EE.
    [...]
    ERROR: test_arraytypes.testImage1
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "/dev/shm/VIGRA/libvigraimpex-1.11.1/obj.x86_64-linux-gnu/vigranumpy/vigra/arraytypes.py", line 1271, in __getitem__
        res = numpy.ndarray.__getitem__(self, index)
    IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/lib/python3/dist-packages/nose/case.py", line 197, in runTest
        self.test(*self.arg)
      File "/dev/shm/VIGRA/libvigraimpex-1.11.1/obj.x86_64-linux-gnu/vigranumpy/test/test_arraytypes.py", line 551, in testImage1
        checkArray(arraytypes.Image, 1, 2)
      File "/dev/shm/VIGRA/libvigraimpex-1.11.1/obj.x86_64-linux-gnu/vigranumpy/test/test_arraytypes.py", line 169, in checkArray
        assert_equal(img.view5D('F').axistags, axistags5)
      File "/dev/shm/VIGRA/libvigraimpex-1.11.1/obj.x86_64-linux-gnu/vigranumpy/vigra/arraytypes.py", line 1153, in view5D
        return self[index].transposeToOrder(order)
      File "/dev/shm/VIGRA/libvigraimpex-1.11.1/obj.x86_64-linux-gnu/vigranumpy/vigra/arraytypes.py", line 1277, in __getitem__
        res = numpy.ndarray.__getitem__(self, tmpindex)
    IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
    

    cu Andreas

    opened by ametzler 0
  • Windows compilation error with MSVC >= 14.28

    I've been tracking compilation issues with VS2019 compilers (I was using 14.29), but I've found that the problem was most likely introduced in MSVC 14.28.

    I haven't found a solution (yet) but also don't want to forget everything the next time I get to it (or someone else looks into it).

    The symptom is the following compilation error, which ultimately occurs in the MSVC STL:

    algorithm(6928): error C3892: '_First': you cannot assign to a variable that is const

    More details below; note that I shortened the paths a little for readability:

    ..\BuildTools\VC\Tools\MSVC\14.29.30133\include\algorithm(6928): error C3892: '_First': you cannot assign to a variable that is const
    ...\BuildTools\VC\Tools\MSVC\14.29.30133\include\algorithm(7050): note: see reference to function template instantiation '_BidIt std::_Insertion_sort_unchecked<_RanIt,_Pr>(const _BidIt,const _BidIt,_Pr)' being compiled
            with
            [
                _BidIt=vigra::StridedScanOrderIterator<1,vigra::UInt8,vigra::UInt8 &,vigra::UInt8 *>,
                _RanIt=vigra::StridedScanOrderIterator<1,vigra::UInt8,vigra::UInt8 &,vigra::UInt8 *>,
                _Pr=std::less<void>
            ]
    ...\BuildTools\VC\Tools\MSVC\14.29.30133\include\algorithm(7080): note: see reference to function template instantiation 'void std::_Sort_unchecked<_RanIt,_Fn>(_RanIt,_RanIt,__int64,_Pr)' being compiled
            with
            [
                _RanIt=vigra::StridedScanOrderIterator<1,vigra::UInt8,vigra::UInt8 &,vigra::UInt8 *>,
                _Fn=std::less<void>,
                _Pr=std::less<void>
            ]
    ...\BuildTools\VC\Tools\MSVC\14.29.30133\include\algorithm(7085): note: see reference to function template instantiation 'void std::sort<_RanIt,std::less<void>>(const _RanIt,const _RanIt,_Pr)' being compiled
            with
            [
                _RanIt=vigra::StridedScanOrderIterator<1,vigra::UInt8,vigra::UInt8 &,vigra::UInt8 *>,
                _Pr=std::less<void>
            ]
    ...\vigra\vigranumpy\src\core\segmentation.cxx(1176): note: see reference to function template instantiation 'void std::sort<vigra::StridedScanOrderIterator<1,T,T &,T *>>(const _RanIt,const _RanIt)' being compiled
            with
            [
                T=vigra::UInt8,
                _RanIt=vigra::StridedScanOrderIterator<1,vigra::UInt8,vigra::UInt8 &,vigra::UInt8 *>
            ]
    ...\vigra\vigranumpy\src\core\segmentation.cxx(1181): note: see reference to function template instantiation 'vigra::NumpyAnyArray vigra::pythonUnique<T,1>(vigra::NumpyArray<1,vigra::Singleband<T>,vigra::StridedArrayTag>,bool)' being compiled
            with
            [
                T=npy_uint8
            ]
    ...\vigra\vigranumpy\src\core\segmentation.cxx(1181): note: see reference to function template instantiation 'void vigra::pyUniqueImpl<T,1,1>::def<Args>(const char *,const Args &)' being compiled
            with
            [
                T=npy_uint8,
                Args=boost::python::detail::keywords<2>
            ]
    ...\vigra\vigranumpy\src\core\segmentation.cxx(1181): note: see reference to function template instantiation 'void vigra::pyUniqueImpl<T,1,1>::def<Args>(const char *,const Args &)' being compiled
            with
            [
                T=npy_uint8,
                Args=boost::python::detail::keywords<2>
            ]
    ...\vigra\vigranumpy\src\core\segmentation.cxx(1181): note: see reference to function template instantiation 'void vigra::pyUniqueImpl<T1,1,5>::def<Args>(const char *,const Args &,const char *)' being compiled
            with
            [
                T1=npy_uint8,
                Args=boost::python::detail::keywords<2>
            ]
    ...\vigra\vigranumpy\src\core\segmentation.cxx(1181): note: see reference to function template instantiation 'void vigra::pyUniqueImpl<T1,1,5>::def<Args>(const char *,const Args &,const char *)' being compiled
            with
            [
                T1=npy_uint8,
                Args=boost::python::detail::keywords<2>
            ]
    ...\vigra\include\vigra/numpy_array_converters.hxx(803): note: see reference to function template instantiation 'void vigra::pyUnique<1,5,npy_uint8,npy_uint32,npy_uint64,npy_int64,void,void,void,void,void,void,void,void>::def<Args>(const char *,const Args &,const char *) const' being compiled
            with
            [
                Args=boost::python::detail::keywords<2>
            ]
    ...\vigra\include\vigra/numpy_array_converters.hxx(803): note: see reference to function template instantiation 'void vigra::pyUnique<1,5,npy_uint8,npy_uint32,npy_uint64,npy_int64,void,void,void,void,void,void,void,void>::def<Args>(const char *,const Args &,const char *) const' being compiled
            with
            [
                Args=boost::python::detail::keywords<2>
            ]
    ...\vigra\vigranumpy\src\core\segmentation.cxx(1618): note: see reference to function template instantiation 'void boost::python::multidef<vigra::pyUnique<1,5,npy_uint8,npy_uint32,npy_uint64,npy_int64,void,void,void,void,void,void,void,void>,boost::python::detail::keywords<2>>(const char *,const Functor &,const Args &,const char *)' being compiled
            with
            [
                Functor=vigra::pyUnique<1,5,npy_uint8,npy_uint32,npy_uint64,npy_int64,void,void,void,void,void,void,void,void>,
                Args=boost::python::detail::keywords<2>
            ]
    

    I found a discussion related to the above error here: https://developercommunity.visualstudio.com/t/VS2019-1684---c-const-issue:-error-C/1320890#T-N1325280

    Bottom line seems to be that the vigra iterator implementation is "wrong" and de-referencing a const iterator should not return a const reference.

    I followed the trail a bit to

    https://github.com/ukoethe/vigra/blob/574de5743342f67408e015823d375b1c00281630/include/vigra/multi_iterator.hxx#L312-L315

    and

    https://github.com/ukoethe/vigra/blob/574de5743342f67408e015823d375b1c00281630/include/vigra/multi_iterator_coupled.hxx#L661-L666

    and finally to

    https://github.com/ukoethe/vigra/blob/574de5743342f67408e015823d375b1c00281630/include/vigra/multi_handle.hxx#L932-L940.

    As far as I understand the problem, the solution should be removing the const from the returned reference. However, I didn't manage to do it in a way that would compile.

    opened by k-dominik 1
  • Look for a better boost::python CMake macro

    I always have to pass a lot of explicit variable definitions to CMake and wonder if there are better configuration macros nowadays.

    From what I can see, we introduced quite some custom configuration code back then in order to work around limitations of the default macros coming with CMake, but CMake has improved a lot in the meantime.

    For instance, there is this comment:

        # 'FIND_PACKAGE(Boost COMPONENTS python)' is unreliable because it often selects
        # boost_python for the wrong Python version
    

    followed by 160 lines of workaround code, and the Python libraries themselves are also detected with custom routines, following

        #  'FIND_PACKAGE(PythonLibs)' is unreliable because results are often inconsistent
        #  with the Python interpreter found previously (e.g. libraries or includes
        #  from incompatible installations). Thus, we ask Python itself for the information.
    

    The documentation at https://cmake.org/cmake/help/latest/module/FindPythonLibs.html nowadays states:

    If calling both find_package(PythonInterp) and find_package(PythonLibs), call find_package(PythonInterp) first to get the currently active Python version by default with a consistent version of PYTHON_LIBRARIES.

    indicating that this should no longer be necessary. Furthermore, PythonInterp+PythonLibs are deprecated in favor of Python3 (requiring CMake 3.12, which is only slightly newer than the 3.10 we have been specifying as the minimum version since March 2021).

    Finally, https://cmake.org/cmake/help/latest/module/FindBoost.html has an example section that suggests that one can also specify the required python version:

    find_package(Boost 1.67 REQUIRED COMPONENTS
                 python36 numpy36)
    

    so hopefully all previous issues with the default scripts are solved.

    opened by hmeine 1
  • Revival checklist

    1. [ ] Ensure the CIs are working again with semi-modern versions of compilers + python
      • [ ] linux
      • [ ] OSX
      • [ ] Windows
    2. [ ] Update changelog with recent PRs.
    3. [ ] Fix matlab compilation: https://github.com/ukoethe/vigra/pull/513
    4. [ ] Release new version
    5. [ ] Address simple warnings in the compilation
    6. [ ] nose->pytest https://github.com/ukoethe/vigra/issues/508
    7. [ ] deprecated functions https://github.com/ukoethe/vigra/issues/505
    8. [ ] https://github.com/ukoethe/vigra/pull/511
    9. [ ] https://github.com/ukoethe/vigra/pull/509
    10. [ ] https://github.com/ukoethe/vigra/pull/510
    11. [ ] address https://github.com/ukoethe/vigra/issues/482
    12. [ ] Release new version
    opened by hmaarrfk 0
Releases (Version-1-11-1)
  • Version-1-11-1 (May 19, 2017)

  • Version-1-11-0 (Mar 17, 2016)

    Changes from Version 1.10.0 to 1.11.0

    • Ported vigranumpy to Python 3.5.
    • Added chunked arrays to store data larger than RAM as a collection of rectangular blocks.
    • Added vigra::ThreadPool and parallel_foreach() for portable algorithm parallelization based on std::thread.
    • Implemented parallel versions of Gaussian smoothing, Gaussian derivatives, connected components labeling, and union-find watersheds.
    • Added graph-based image analysis, e.g. agglomerative clustering
    • Included the callback mechanism described in Impossibly Fast C++ Delegates by Sergey Ryazanov (needed for agglomerative clustering).
    • Added many image registration functions.
    • Extended the collection of multi-dimensional distance transform algorithms by vectorial DT, boundary DT, and eccentricity transform.
    • Added skeletonizeImage(), nonLocalMean(), multi-dimensional integral images.
    • Added new 2D shape features based on skeletonization and the convex hull.
    • Additional arithmetic and algebraic functions for vigra::TinyVector.
    • Added vigra::CountingIterator.
    • Minor improvements and bug fixes in the code and documentation.
    Source code(tar.gz)
    Source code(zip)
    vigra-1.11.0-src.tar.gz(49.45 MB)
    vigra-1.11.0-win64-vc14.zip(57.13 MB)
  • Version-1-10-0 (Nov 26, 2013)

    Changes from Version 1.9.0 to 1.10.0

    • VIGRA got a tutorial.
    • Significant simplification of the API: MultiArrayView arguments can now be passed to functions directly. The old syntax with Argument Object Factories (srcImageRange(), srcMultiArray() and relatives) remains valid, but is only required when the arguments are old-style BasicImages.
    • Made StridedArrayTag the default for vigra::MultiArrayView .
    • Added an efficient multi-dimensional vigra::GridGraph class which supports both the LEMON and boost::graph APIs.
    • Generalized various algorithms to arbitrary dimensions (gaussianGradientMultiArray(), hessianOfGaussianMultiArray(), gaussianDivergenceMultiArray(), localMinima(), localMaxima(), labelMultiArray(), watershedsMultiArray()).
    • Added slicSuperpixels() for arbitrary dimensions.
    • Added automatic differentiation (see vigra::autodiff::DualVector).
    • Added nonlinearLeastSquares() using the Levenberg-Marquardt algorithm and automatic differentiation.

    More information about the changes can be found on the changelog page.
    Source code(tar.gz)
    Source code(zip)
    vigra-1.10.0-src-with-docu.tar.gz(34.44 MB)
    vigra-1.10.0-win64.exe(11.69 MB)
    vigranumpy-1.10.0.win-amd64.exe(11.65 MB)