
Comments (74)

  • Springtime
    In an earlier video they made a couple of years back about Disney's sodium-vapor technique, Paul Debevec suggested he was considering creating a dataset using a similar premise: filming enough perfectly masked reference footage to train models to achieve better keying. So it was interesting to see Corridor tackle this by using synthetic data instead.
  • jayd16
    As far as alternatives go, I wonder if anyone has tried a screen that cycles through colors in a known sequence. With this modulating-color screen, it might actually be easier to separate the subject because you get around the "green shirt over green screen" problem. You might even be able to use temporal sampling to correct the light the screen casts on the subject, since you would have a full spectrum of response. I could also imagine using polarized light as the backdrop.
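(The modulating-backdrop idea above reduces to a per-pixel test: screen pixels should track the known color sequence over time, while subject pixels should not. A minimal NumPy sketch of that idea; the function name, the correlation measure, and the threshold are all hypothetical, not from any real keyer.)

```python
import numpy as np

def temporal_key(frames, backdrop_sequence, threshold=0.9):
    """Classify pixels as background if their color over time
    correlates with the known backdrop color sequence.

    frames: (T, H, W, 3) float array of captured frames
    backdrop_sequence: (T, 3) float array of backdrop color per frame
    Returns an (H, W) boolean matte (True = background).
    """
    # Center both signals over time so the correlation ignores
    # each pixel's mean color.
    f = frames - frames.mean(axis=0, keepdims=True)
    b = backdrop_sequence - backdrop_sequence.mean(axis=0, keepdims=True)
    # Per-pixel dot product with the backdrop signal, summed over
    # time and channels, normalized to a correlation in [-1, 1].
    num = np.einsum('thwc,tc->hw', f, b)
    denom = (np.sqrt((f ** 2).sum(axis=(0, 3))) *
             np.sqrt((b ** 2).sum()) + 1e-8)
    return (num / denom) > threshold
```

A pixel lit mostly by constant key light would correlate weakly with the backdrop sequence, while an unoccluded screen pixel would track it almost perfectly; real footage would of course also need the spill correction the comment mentions.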
  • diacritical
    From ~04:10 till 05:00 they talk about sodium-vapor lights and how Disney has the exclusive rights to use them. From what I've read, the knowledge of how to make them is a trade secret, not a patent. It seems weird that it would be hard to recreate something from the 1950s. I also wonder how many hours were wasted by people who had to use inferior technology because Disney kept it secret. Cutting out animals and objects from the background one frame at a time seems so mind-numbingly boring.
  • swframe2
    The model in this repo seems pretty good: https://github.com/xuebinqin/DIS
  • vsviridov
    The community has managed to drastically lower hardware requirements, but so far I think only Nvidia cards are supported, so as an AMD owner I'm still missing out :(
  • comex
    See also this video comparing Corridor Key to traditional keyers: https://www.youtube.com/watch?v=abNygtFqYR8
  • dylan604
    The sad thing about this is that the problems encountered in post come from the production team saying "fix it in post" during the shoot. I've been on set for green-screen shoots where the lighting was not done properly. I watched the gaffer walk across the set taking readings from his meter before saying the lighting was good. I flipped on the waveform and told him it was not even (which never goes down well when the camera department tells the gaffer it's not right). He put up an argument, went back, and took measurements again before repeating that it was good. I flipped the screen around and showed him where it was obviously not even. A third set of meter readings and he started adjusting lights. Once the footage was in post, the FX team commented on how easy the keys were because of the even lighting.

    The problem is that the vast majority of people on set have no clue what is going on in post. To the point that, when the budget is big enough, a post supervisor is present on production days to give input so that "fixing it in post" is minimized. When there is no budget, you'll see situations just like in the first 30 seconds of TFA's video: a single lamp lighting the background, so you can easily see the light falling off, plus shadows from wrinkles where the screen was pulled out of the bag 10 minutes before shooting. People just don't realize how much light a green screen takes. They also fail to allow enough space to pull the talent far enough off the wall to avoid the green reflecting back onto the talent's skin.

    TL;DR: they solved something to make post less expensive because corners were cut during production.
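(The waveform check described above can be roughly approximated in code: sample the luma across the screen region and flag it when the spread exceeds some tolerance. A minimal sketch using the standard Rec. 709 luma weights; the function name and the tolerance value are arbitrary assumptions, and a real waveform monitor shows the full per-column distribution rather than a single pass/fail number.)

```python
import numpy as np

def screen_evenness(frame, tolerance=0.05):
    """Rough stand-in for a waveform evenness check on a region
    assumed to be all green screen.

    frame: (H, W, 3) float RGB array in [0, 1]
    Returns (is_even, spread) where spread = max luma - min luma.
    """
    # Rec. 709 luma coefficients.
    luma = frame @ np.array([0.2126, 0.7152, 0.0722])
    spread = float(luma.max() - luma.min())
    return spread <= tolerance, spread
```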
  • IshKebab
    Pretty impressive results! Seems like someone has even made a GUI for it: https://github.com/edenaion/EZ-CorridorKey. Still Python, unfortunately.
  • MrVitaliy
    Has anyone tried using lidar and just cutting by measured distance to the object?
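(A depth-based cutout like the one asked about reduces to thresholding a depth map that has been registered to the color camera; a hypothetical sketch, assuming that registration has already been done. In practice lidar resolution is far below camera resolution, so fine edges like hair and motion blur are exactly where this breaks down.)

```python
import numpy as np

def depth_matte(depth, near=0.3, far=2.5):
    """Keep pixels whose measured distance (meters) falls inside the
    subject's distance band; everything else becomes background.

    depth: (H, W) float array of per-pixel distances, registered to
    the color camera (the registration itself is the hard part).
    Returns an (H, W) boolean matte (True = subject).
    """
    return (depth >= near) & (depth <= far)
```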
  • amelius
    There's still a bug: the glass with water does not distort the checker pattern in the background at 24:12.
  • amelius
    Is it a coincidence that the result is stable between subsequent frames?
  • superjan
    Watched this a few days ago. The video is light on technical details, except maybe that they used CGI to generate training data.
  • ralusek
    I'm a software engineer who, like the vast majority of you, uses AI agents in my workflow every day. That said, I have to admit it feels a little weird to hear someone who does not write code say that they built something, without even mentioning that they had an agent build it (unless I missed that).
  • Computer0
    Looking forward to trying it out; only 8 GB of VRAM or unified memory required!