
    OpenNI Tracker tutorial suggestions

    • bonemap (Izzy Guru), last edited by bonemap

      Hi @mark,

      Congratulations on the work you have done to bring the Isadora OpenNI Tracker module out. The tutorial resource is a great companion, with great templates for skeleton tracking. Thank you for inviting comments and suggestions for additional tutorial segments. The forum thread that you have used to invite comments does not appear to allow replies, so I am commenting here...

      One thing I find remarkable about depth cameras is their ability to calibrate depth. It would be of great benefit to see a demonstration of some of the ways that points in physical space can be defined as zones for triggering or modifying effects. These points sit at a given z depth in the space, but could be anywhere vertically and horizontally. How do you establish a 3-dimensional network of hotspot points from the depth image? A participant moving through and around the tracking space would then register a position within Isadora. That position and volumetric data could then be used to modify spatial effects, such as lighting, or to activate an object based on participant proximity.
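      To make the idea concrete, here is a minimal sketch (outside Isadora) of what a 3D hotspot network could look like, assuming you already have a tracked joint position in centimetres from the skeleton output. The hotspot names, positions, and radii are hypothetical placeholders:

```python
import math

# Hypothetical hotspots: (x, y, z) centres in cm plus a trigger radius.
HOTSPOTS = [
    {"name": "lamp", "pos": (100.0, 150.0, 300.0), "radius": 50.0},
    {"name": "door", "pos": (-80.0, 150.0, 450.0), "radius": 60.0},
]

def active_hotspots(joint_pos):
    """Return the names of hotspots whose sphere contains the tracked joint."""
    hits = []
    for spot in HOTSPOTS:
        # Euclidean distance from the participant's joint to the hotspot centre.
        if math.dist(joint_pos, spot["pos"]) <= spot["radius"]:
            hits.append(spot["name"])
    return hits

print(active_hotspots((90.0, 140.0, 320.0)))  # prints ['lamp']
```

      Each frame, the tracked position is tested against every sphere, so several hotspots can be active at once based on proximity.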

      I am sure there are any number of fantastic tutorial possibilities concerning the capabilities of the OpenNI Tracker that will be of interest to many users. The tutorial I have suggested is a familiar use for depth cameras in an installation or participatory setup.

      Best Wishes

      Russell

      http://bonemap.com | Australia
      Izzy STD/USB 3.2.6 | + Beta 3.x.x
      MBP 16” 2019 2.4 GHz Intel i9 64GB AMD Radeon Pro 5500 8 GB 4TB SSD | 14.1.2 Sonoma
      Mac Studio 2023 M2 Ultra 128GB | OSX 14.1.2 Sonoma
      A range of deployable older Macs

      • Skulpture (Izzy Guru), last edited by Skulpture

        Hi. I think there might be a way to do this with the Inside Range actor and the Logical Calculator actor. I need some time to work on this, though.

        In theory the Z range can be checked first: say the zone (imagine a box) exists from -2 to -1, while the full range is -10 to 10. If a person is inside this range, the Inside Range actor gives you a 'true', which is 1.

        Then do the same for Y (horizontal): if the box/zone exists from 2 to 4, while the full range is -10 to 10, and a person is inside this range, the Inside Range actor again outputs 1.

        When the Logical Calculator receives a 1 and a 1, it outputs a 1, and that trigger can then be used to do anything you want.
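        The Inside Range plus Logical Calculator patch described above boils down to two range tests ANDed together. A small sketch of the same logic (the band values are the example numbers from this post, not anything prescribed by Isadora):

```python
def inside_range(value, low, high):
    # Mirrors the Inside Range actor: outputs 1 ('true') when value is in [low, high].
    return 1 if low <= value <= high else 0

def zone_trigger(z, y):
    # Z band of the box: -2 to -1 (out of a full -10..10 range).
    z_ok = inside_range(z, -2, -1)
    # Y band of the box: 2 to 4 (out of a full -10..10 range).
    y_ok = inside_range(y, 2, 4)
    # Logical Calculator set to AND: fires only when both inputs are 1.
    return z_ok and y_ok

print(zone_trigger(-1.5, 3))  # prints 1 (person is inside the box)
print(zone_trigger(-1.5, 5))  # prints 0 (right depth, wrong horizontal position)
```

        Adding a third Inside Range actor on X and a second AND stage would extend the same patch to a full 3D box.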

        Graham Thorne | www.grahamthorne.co.uk
        RIG 1: Windows 11, AMD 7 Ryzen, RTX3070, 16gig RAM. 2 x M.2 SSD. HD. Lenovo Legion 5 gaming laptop.
        RIG 2: Windows 11, Intel i19 12th Gen. RTX3070ti, 16gig RAM (ddr5), 1x M.2 SSD. UHD DELL G15 Gaming laptop.
        RIG 3: Apple rMBP i7, 8gig RAM 256 SSD, HD, OS X 10.12.12

        • bonemap (Izzy Guru) @Skulpture, last edited by bonemap

          @skulpture said:

          do this with the inside range actor and the logical calculator actor

          Hi Graham,

          That looks really interesting; I will have to try it.

          I was thinking in terms of the OpenNI Tracker and what might be possible with the 'depth min cm' and 'depth max cm'.


          Limiting the depth range in bands and slices through 3D space suggests that a series of simultaneous hotspots could be active. A participant would be detected and represented in the depth image within a band/slice at a defined z distance, with the properties calibrated for a corresponding depth slice and distance from the camera. What is not clear is how fast the OpenNI Tracker would allow the band/slice position to cycle through these property changes, so that simultaneous slices and hotspots could be simulated effectively.
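          If the whole depth frame were available at once, the cycling would not be needed: each band could be tested every frame. A sketch of that idea, assuming the depth frame arrives as a 2D array of per-pixel distances in centimetres (the band edges and pixel threshold are illustrative, not OpenNI Tracker parameters):

```python
import numpy as np

def occupied_bands(depth_cm, band_edges, min_pixels=500):
    """Report which depth bands contain a participant.

    depth_cm: 2D array of per-pixel distances in cm (0 = no reading).
    band_edges: ascending edges like [100, 200, 300, 400] defining slices,
    standing in for per-slice 'depth min cm' / 'depth max cm' settings.
    A band counts as occupied when enough pixels fall inside it.
    """
    hits = []
    for near, far in zip(band_edges, band_edges[1:]):
        # Boolean mask of pixels whose depth lands inside this slice.
        mask = (depth_cm >= near) & (depth_cm < far)
        if mask.sum() >= min_pixels:
            hits.append((near, far))
    return hits
```

          A participant standing roughly 2.5 m from the camera would then light up only the 200–300 cm slice, so several slices can be monitored simultaneously without shifting any properties.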

          Thinking about it now it appears a bit complex and fiddly. There might be other ways to think about it?

          Best Wishes

          Russell


          • Skulpture (Izzy Guru) @bonemap

            @bonemap said:

            limiting the depth range in bands and slices through 3D space will allow a series of simultaneous hotspots to be active

            That could work. I see these as settings rather than a means of dynamic interactive control. My workflow would be to set this up to define the 'active area' of the space, and then use the outputs from the actor to calculate the tracking.

            I can see your thinking, and it may very well work, but I am not sure that is how I would do it.


            • bonemap (Izzy Guru) @Skulpture

              @skulpture said:

              use the outputs from the actor to calculate the tracking

               Hi Graham,

              I think you are probably right. The skeleton tracking provides a z position for a participant while tracking, and this could easily be translated into a grid of zones.

              I was really curious about the depth slices; I love the way you can calibrate the depth to slice up space.
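              Translating the tracked z position into a grid of zones is a one-line mapping. A sketch, with an assumed active area of 100–500 cm divided into four equal slices (the range and zone count are placeholders to be calibrated per installation):

```python
def zone_index(z_cm, near=100.0, far=500.0, zones=4):
    # Map a tracked joint's z distance (cm) onto one of `zones` equal slices.
    # Values outside [near, far) return -1, meaning outside the active area.
    if not (near <= z_cm < far):
        return -1
    width = (far - near) / zones
    return int((z_cm - near) // width)

print(zone_index(250.0))  # prints 1 (second slice, 200-300 cm)
```

              The same mapping applied to x and y would give the full 3D grid of zones.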


              Best Wishes

              Russell

