Isadora Forum
Tag: background
    • Bennnid

      Change background without chromakey, but using an empty picture as a reference?

      How To... ?
      Tags: background, effect, incrustation • Bennnid
      0 Votes • 10 Posts • 671 Views

      Bennnid

      @dusx

      I see, so with time and good choices 😉 I have plenty to test now.

      Although I'm not sure I can get the same smooth effect around the performer that some zoom.us backgrounds do, digging into an alpha mask seems like a good solution!
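      The technique in this thread's title, comparing the live feed against an "empty" picture of the stage instead of chroma-keying, amounts to frame differencing. A minimal pure-Python sketch of the idea (the frame values and threshold are invented for illustration; in Isadora you would build this from actors, not code):

```python
# Hypothetical sketch: mask the foreground by differencing the live frame
# against an "empty" reference frame (no chroma key needed).
# Frames are tiny grayscale grids (lists of lists, values 0-255).

def difference_mask(reference, frame, threshold=30):
    """Return a binary mask: 255 where frame differs from the empty
    reference by more than `threshold`, else 0 (background)."""
    return [
        [255 if abs(f - r) > threshold else 0
         for f, r in zip(frow, rrow)]
        for frow, rrow in zip(frame, reference)
    ]

# Empty-stage reference, and a frame with a "performer" in the middle.
reference = [[10, 10, 10],
             [10, 10, 10],
             [10, 10, 10]]
frame     = [[10, 10, 10],
             [10, 200, 10],
             [10, 10, 10]]

mask = difference_mask(reference, frame)
print(mask)  # → [[0, 0, 0], [0, 255, 0], [0, 0, 0]]
```

      The resulting white-on-black mask is exactly the kind of input an Alpha Mask actor expects: white = keep, black = transparent. The hard part in practice (and the "smooth edge" problem mentioned above) is that real backgrounds are noisy, so a fixed threshold gives ragged edges.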

    • Whatnao

      [ANSWERED] Run a video throughout the show while changing the scenes

      How To... ?
      Tags: background, scene, scenes, secondary scene, video playback • Whatnao
      0 Votes • 13 Posts • 858 Views

      F

      @whatnao You could also run the projector in an activated scene, with a Keyboard Watcher that listens for a trigger and toggles the projector on or off, so the movie keeps running and only the projector goes on or off. Since the scene stays active, the Keyboard Watcher will react even if you are currently in another scene.
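      The pattern described above, playback that never pauses while a key press only gates the output, can be sketched as a toy simulation (pure Python, every name invented for illustration; in Isadora this would be a Keyboard Watcher wired to a Projector's active input):

```python
# Toy model (not Isadora syntax): the movie clock keeps advancing on
# every tick, while keyboard events only toggle whether frames are shown.

class Projector:
    def __init__(self):
        self.on = True

    def toggle(self):
        self.on = not self.on

def run(ticks, key_events):
    """Advance playback every tick; a key event toggles the projector.
    Returns (final_playback_position, frames_actually_shown)."""
    proj = Projector()
    position = 0
    shown = 0
    for t in range(ticks):
        if t in key_events:       # the "keyboard watcher" fires
            proj.toggle()
        position += 1             # the movie keeps playing either way
        if proj.on:
            shown += 1
    return position, shown

# Toggle off at tick 2 and back on at tick 6: playback never pauses.
print(run(10, {2, 6}))  # → (10, 6)
```

      The key point the reply makes is the separation of concerns: playback state lives in the always-active scene, and the visible output is just a switch on top of it.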

    • L

      [LOGGED] Keying Head & Shoulders like in zoom & skype

      Feature Requests
      Tags: background, virtual, virtual theatre, zoom, skype • liannemua
      0 Votes • 12 Posts • 1.0k Views

      liminal_andy

      So the following is purely for fun, in response to @mark's post imagining how this would be done. I did follow up on it over the weekend and got something "working".

      I heavily modified the project I mentioned earlier by manually rolling it over to the TensorFlow Lite c_api (a real pain!), then porting it to Windows and feeding it the deeplabv3_257_mv_gpu.tflite model. To make it useful to Isadora, I dusted off and updated an OpenCV-to-Spout pipeline in C++ that I used a few years ago for some of my live projection-masking programs. So now my prototype can receive an Isadora stage, run it through the model, and output the resulting mask to Spout again for Isadora to use with the Alpha Mask actor.

      My results:

      Now obviously, this is insane to actually attempt for production purposes in its current form. I'm getting about 5 fps (granted, with no GPU acceleration, and running in debug mode). I could slightly improve things by bouncing the original Isadora stage back on its own Spout server, but this is just a proof of concept. In this state, it should be relatively easy to port to Mac/Syphon and add GPU acceleration on compatible systems for higher FPS and/or multiple instances for many performers.
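      For readers curious about the mask-extraction step in the pipeline above: DeepLabV3-style models emit per-pixel class scores, and the alpha mask comes from taking each pixel's highest-scoring class and testing it against the person class (index 15 in the PASCAL VOC labelling that deeplabv3 models use). A tiny pure-Python sketch of just that step, with invented scores standing in for real model output:

```python
# Sketch of post-processing DeepLabV3-style output into an alpha mask.
# Real output is a dense tensor; here each pixel is a dict of
# class_index -> score on a toy 2x2 "image".

PERSON = 15  # person class index in the PASCAL VOC label set

def person_mask(scores):
    """scores[y][x] maps class index -> score; return a 255/0 mask
    that is white wherever the argmax class is 'person'."""
    return [
        [255 if max(px, key=px.get) == PERSON else 0 for px in row]
        for row in scores
    ]

scores = [
    [{0: 0.9, PERSON: 0.1}, {0: 0.2, PERSON: 0.8}],
    [{0: 0.6, PERSON: 0.4}, {0: 0.1, PERSON: 0.9}],
]
print(person_mask(scores))  # → [[0, 255], [0, 255]]
```

      In the prototype described above, a mask like this is what gets pushed to Spout for the Alpha Mask actor to consume.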


      Again, just a fun weekend project but I found it very educational. 

    • V

      [ANSWERED] How to superimpose a video image with background removed on a picture

      How To... ?
      Tags: background, alpha mask, freeze, live feed, green screen • vjw
      0 Votes • 3 Posts • 731 Views

      mark

      @vjw said:

      If anyone has done this, I'd appreciate some direction.

      Dear Valerie,

      Please review this post for possible solutions.

      Best Wishes,
      Mark

    • P

      [ANSWERED] Webcam background removal

      How To... ?
      Tags: background, freeze, actors, live feed • photogramdude
      0 Votes • 4 Posts • 874 Views

      P

      Hi, those are great solutions; I didn't realise Isadora could do that. I have some ideas about linking multiple eyes together to lock onto the eye shape, and then putting a Cluedo-style cutout over it with a Gaussian mask. I've attached a couple of versions (sans eyes) attempted prior to this. Obviously, foreground faces are not always more luminant than the background, and there are a variety of other problems inherent in a non-machine-vision approach. Attachments: ndi fakebg.izz, ndi fakebg2.izz

      Ultimately, for our application we're going for zero user requirements, so we're trying to avoid asking the user to do anything. However, it could be possible to gamify this step in the interim as part of onboarding... Thanks!

    • D1gits

      Key - Live video

      How To... ?
      Tags: photo booth, background, keying • D1gits
      0 Votes • 11 Posts • 5.5k Views

      DusX

      Regarding the Kinect v2 keyed output: although the camera captures 1920x1080 video, the depth sensor is 512x424. Therefore the keyed feed matches the depth feed and is 512x424. That is a Spout feed with alpha. You are correct, this app will only run on PC.
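      To illustrate the resolution mismatch described above: compositing a 512x424 keyed feed over 1920x1080 video implies an upscale somewhere in the chain. A minimal nearest-neighbour resize in pure Python (a toy 2x2 mask stands in for the real feed; in practice Spout/Isadora would handle this scaling for you):

```python
# Toy nearest-neighbour upscale, showing how a low-resolution depth mask
# maps onto a larger colour frame. Pure Python, illustration only.

def nearest_resize(mask, out_w, out_h):
    """Resize a 2D mask (list of rows) to out_w x out_h by
    nearest-neighbour sampling."""
    in_h, in_w = len(mask), len(mask[0])
    return [
        [mask[y * in_h // out_h][x * in_w // out_w] for x in range(out_w)]
        for y in range(out_h)
    ]

small = [[0, 255],
         [255, 0]]  # stand-in for a 512x424 alpha mask
big = nearest_resize(small, 4, 4)
# Each source pixel becomes a 2x2 block in the upscaled mask.
```

      Nearest-neighbour keeps the mask hard-edged; a real pipeline would typically blur or feather the upscaled alpha to soften the 512x424 stair-stepping against the 1080p image.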