One reason I do a kind of "downcasting" in C# is to simulate sum types, or discriminated unions, which go well with the new pattern matching. You have to be careful, though, and ensure your handlers throw exceptions for unhandled types (when somebody adds a new subtype, for example). Support is not perfect in C#, but it helps for modelling domains and for F# interop.
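For example, a minimal sketch of this approach, using hypothetical Shape/Circle/Rectangle types (none of these names are from the comment or the video):

```csharp
using System;

// A closed-ish hierarchy standing in for a discriminated union.
public abstract record Shape;
public sealed record Circle(double Radius) : Shape;
public sealed record Rectangle(double Width, double Height) : Shape;

public static class Areas
{
    public static double Of(Shape shape) => shape switch
    {
        Circle c    => Math.PI * c.Radius * c.Radius,
        Rectangle r => r.Width * r.Height,
        // The guard the comment mentions: fail loudly if someone adds a
        // new subtype without updating every handler.
        _ => throw new InvalidOperationException($"Unhandled shape: {shape.GetType()}")
    };
}
```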
Joshua Kerievsky talks about this in his book *Refactoring to Patterns*, in the chapter about the Visitor pattern and heterogeneous collections. He says:
> Type-casting objects to access their specific interfaces is acceptable if it’s not done frequently. However, if this activity becomes frequent, it’s worth considering a better design.
So I understood it as: there are cases where the Visitor pattern is too complex for the problem, and downcasting can be acceptable if it is done infrequently and kept isolated.
- Move Accumulation to Visitor, p. 320 - Kerievsky, J., *Refactoring to Patterns*
I see downcasting more as a "try not to do this" rather than a "never do this". From a theoretical stance, I get your point. But when finances become involved, sometimes it's just cheaper and quicker to downcast than to create the supporting code to pass the appropriate objects around. You're not usually given the luxury of designing the whole system, and you're often forced to make sloppy solutions like this, or waste valuable weeks trying to fix the architecture when you often don't have the budget for that.
I had a similar argument for your topics on singletons; I see why you want to avoid them where possible, but would never make it an absolute rule.
"NEVER!" is a strong word. How about serialization? Reflection? Passing objects in and out of a legacy library? What if not using downcasting would result in an unreasonable amount of additions to the type space, making it much less comprehensible and difficult to maintain? If I know you, you must have thought of these, so I'm betting you're being deliberately provocative in your titles :D Now that's a social media smell.
Görkem PAÇACI Ah, interesting counterpoints as usual. Thanks for watching! :) :) But please give an example. Why would e.g. serialization necessitate downcasting?
Görkem PAÇACI Also... about the social media smell you are completely on point sir. I only crush code smells ;) :)
Görkem PAÇACI Pardon the multiple posts. But also, the word "unreasonable" implies pragmatically unreasonable, so that becomes a very subjective debate. It is possible that my pain point of "too many" types comes later than e.g. yours. However, I would have to see an example to agree that too many types is a problem. I mean, too many types may also indicate that one has got the "wrong" abstraction, rather than that one should somehow use downcasting. Metaphorically I'm thinking: when you're in a hole, stop digging. I.e. if you've got an abstraction problem, don't make it worse by coupling to concretions (downcasting).
But yes, "never" is a strong word so I'm completely humble on this point and welcome being proven wrong :) :)
All abstractions leak. That's why sometimes it's acceptable to downcast, rather than keep looking for that one perfect abstraction that never was (see www.joelonsoftware.com/articles/LeakyAbstractions.html). I don't agree that it's too subjective; it is pretty obvious when you see the kind of code where the programmer/designer was really pushing it too far just for the sake of abstraction.
There are also the other examples I gave, serialization and reflection, which often require some sort of down-casting.
After our in-person discussion I now see what you mean. If you are writing a general serialization library that serializes an "unknown" object graph, then deserialization will *necessarily* demand downcasting or something of the like. The deserialization method will, if it's supposed to be able to deserialize any object graph representation, have to return something of a type that will need to be downcasted, such as Object. I did not understand that you were referring to serialization of general graphs. Very interesting.
I should have been clearer that I'm arguing about what happens "within" an application's boundaries. Lots of interesting (read: dangerous) things happen at the "edge" of our applications. I've e.g. got a new video coming up about how replacing conditionals with polymorphism becomes problematic when reaching the boundary (i.e. the primitive types) of the application. I'm loosely thinking along the following lines: if a problem necessitates a primitive datatype, then it will not necessarily be susceptible to classic OO solutions. As soon as we have abstracted "away" the primitive, we are back in OOD land, but before we have (at the "boundary" of the application) we are not, and correspondingly our solutions will have to be very different. I find this problematic, but my gut tells me that introducing things such as if-statements and downcasting treats the symptoms and not the disease. But perhaps I'm too naive :)
Regarding leaky abstractions: thanks, that was super interesting in itself. I'll research it and hopefully make a video on just that :) :) I'm not, however, entirely convinced that it's a good way to think about whether downcasting is suitable within the application's "boundaries". Most of Joel's examples regard problems at the "boundary", and it seems to me that abstractions within "what you control" and abstractions "crossing the boundary of what you control" may suffer fundamentally different challenges (as discussed above). But I need to do some more thinking on this.
Generics in Java: the ArrayList implementation uses downcasting of its internal array from Object to the generic type T. I would love to know of any way of doing that without a downcast in Java.
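For what it's worth, here is a rough C# mock-up of the pattern being described (real .NET collections use a T[] internally because generics are reified, so this only mimics what Java's erased generics force ArrayList to do):

```csharp
using System;

// Hypothetical container mimicking the Java situation described above:
// the backing array only knows "object", so retrieval needs a downcast.
public class ErasedList<T>
{
    private object[] _items = new object[4];
    private int _count;

    public void Add(T item)
    {
        if (_count == _items.Length)
            Array.Resize(ref _items, _items.Length * 2);
        _items[_count++] = item;
    }

    // The cast the comment refers to: object -> T on every read.
    public T Get(int index) => (T)_items[index];
}
```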
I use downcasting to store data. I'm not sure what data I would be storing and it could change at runtime.
If you have multiple decoupled systems sending messages to each other, you may need to downcast to store the data, so your only job is to manage the messaging system and ensure that the actors are running.
Let the actors worry about the correctness of the message
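A minimal sketch of that idea with made-up Envelope/MessageBus names (not from any particular framework): the transport stores payloads as plain object and each subscriber downcasts to the type it expects.

```csharp
using System;
using System.Collections.Generic;

// The messaging layer only sees "object"; each subscriber downcasts the
// payload back to the type it expects.
public sealed class Envelope
{
    public string Topic { get; }
    public object Payload { get; }
    public Envelope(string topic, object payload) { Topic = topic; Payload = payload; }
}

public sealed class MessageBus
{
    private readonly Dictionary<string, Action<object>> _handlers = new();

    public void Subscribe<T>(string topic, Action<T> handler) =>
        _handlers[topic] = payload => handler((T)payload);   // downcast at the edge

    public void Publish(Envelope message)
    {
        if (_handlers.TryGetValue(message.Topic, out var handler))
            handler(message.Payload);   // the subscribing actor owns the message's correctness
    }
}
```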
In my case, I had a robot which had to do basic operations with, let's say, a display. Now, since there will be 10,000-15,000 such robots and there are a lot of possible displays that can be used, I chose to downcast the interface to the mounted hardware.
Thus it is much easier to test, plus the user just attaches the available hardware and the program injects the correct subclass.
Speculating here, but can it sometimes be a memory or speed advantage not to include the whole inheritance chain?
Interesting idea. Honestly it's beyond my knowledge, but googling it seems that in Java it's actually the reverse. I.e. downcasting (negligibly) decreases performance (stackoverflow.com/questions/8803517/performance-of-object-typecasting) since a runtime check will have to be run to ensure that the downcast will "work".
Ps. thanks for watching and engaging :) :)
Thanks for interesting programming talks.
Down-casting is discussed in the context of the MS certs, but implicit down-casting is not supported by C#.
I just tried it on .NET Fiddle. Because it is not 100% full-featured, I double-checked in an IDE.
Trying to down-cast gives a compile error → Cannot convert source type 'Base' to target type 'Derived'
You can fool the compiler with an explicit cast, but that just pushes the problem to runtime.
↓
Unhandled exception. System.InvalidCastException: Unable to cast object of type 'Base' to type 'Derived'.
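A minimal reproduction of what's being described, using the Base/Derived placeholder names from the error messages above:

```csharp
using System;

class Base { }
class Derived : Base { }

class Program
{
    static void Main()
    {
        Base b = new Base();

        // Derived d = b;          // compile error: cannot convert 'Base' to 'Derived'
        Derived d = (Derived)b;    // compiles, but throws InvalidCastException at runtime,
                                   // because the object really is only a Base
        Console.WriteLine(d);
    }
}
```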
Currently I'm just a junior dev, so I can't tell much about why down-casting could be useful. That said, it makes little to no sense to me, because it would mean the down-casted type might have specialized methods that do not actually exist. Would it use the implementation from the derived class by default?
Anyway, I heard of these concepts in the scope of the MTA certs; never heard of them earlier.
Still, it's good to have multiple sources to try to understand things in the best possible way.
Thanks for the video.
Interesting video, thank you! I've heard many times that downcasting is bad, but I am still unsure how to avoid it sometimes.
Example: In a game, I have a player, items and traps. Items and traps are both entities, i.e. they both inherit from a class Entity. Now I check collisions between the player and the entities: if the player collides with the Entity ent, I check the type of ent: if ent is of type Item, the player should collect the item; if ent is of type Trap, the player should die.
This procedure seems totally wrong, I use type checking and down casting. But what is a good way to avoid this? Is there a pattern for this situation?
I'm still a bit of a beginner, so please excuse me if my comment is off base, but in this example would the following work? Your collision check calls a method on the entity (e.g. "OnCollision()") that is abstract or virtual. In your Item class, OnCollision calls PickUp(), and in your Trap class, OnCollision calls Die(). So in a general sense, the way it works is that the player collides with an abstract Entity, which is responsible for detecting when the collision happens and triggering *something* to happen, and then the concrete Items and Traps specify what that something is.
@@willpetillo1189 came here to post exactly this. Set up an abstract function which you know exists 100%, and leave the exact behaviour of the function to whichever subclass the function is being called on.
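A small sketch of the approach described in these two replies, with simplified stand-ins for the original Player/Item/Trap classes:

```csharp
// The player collides with an abstract Entity, and each concrete subclass
// decides what the collision means.
public abstract class Entity
{
    public abstract void OnCollision(Player player);
}

public class Item : Entity
{
    public override void OnCollision(Player player) => player.PickUp(this);
}

public class Trap : Entity
{
    public override void OnCollision(Player player) => player.Die();
}

public class Player
{
    public void PickUp(Item item) { /* add the item to the inventory */ }
    public void Die() { /* handle the player's death */ }

    // No type check or downcast: virtual dispatch picks the right override.
    public void CollideWith(Entity entity) => entity.OnCollision(this);
}
```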
If what they said isn't good enough, I recommend looking into double dispatch. So, in your case, when an Item collides with something, it dispatches back to the thing it collided with by calling collidedWithItem(this), and the Trap would call collidedWithTrap(this).
It's not always the best solution, but if the list of Entity types is small enough and won't keep growing too fast, it's handy.
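And a hedged sketch of that double-dispatch alternative, with an illustrative ICollider interface based on the collidedWithItem/collidedWithTrap calls mentioned above:

```csharp
// The first virtual call resolves the entity's concrete type; the entity then
// calls back on the collider with a type-specific method, so neither side
// needs a downcast. Names are illustrative only.
public interface ICollider
{
    void CollidedWithItem(Item item);
    void CollidedWithTrap(Trap trap);
}

public abstract class Entity
{
    public abstract void CollideWith(ICollider collider);
}

public class Item : Entity
{
    // We now statically know "this" is an Item, so dispatch back with it.
    public override void CollideWith(ICollider collider) => collider.CollidedWithItem(this);
}

public class Trap : Entity
{
    public override void CollideWith(ICollider collider) => collider.CollidedWithTrap(this);
}

public class Player : ICollider
{
    public void CollidedWithItem(Item item) { /* pick the item up */ }
    public void CollidedWithTrap(Trap trap) { /* die */ }
}
```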
If you design a system where a Cat is a subtype of Animal, even when no other subtype of Animal exists, it is more than reasonable to assume there will be some other subtype of Animal in the future. Otherwise, why create the super type in the first place?
I'm running into a downcasting issue and I'm not sure what the best solution is. My problem is that I have a DeckManager class that is only responsible for the draw and discard functions on a card class. Should the player contain the functionality for how a card type is played, or should the card know how it's played?
"I am shocked that downcasting exists"
This was EXACTLY my response! How on Earth can such a concept be legal when it introduces so many obvious issues?
Edit: just read some responses below. Dealing with legacy superclasses does seem like a legitimate use case. But I think there should be a less harmful solution.
The problem is inheritance. When you inherit from a class and also implement some interface, you MUST (for better or worse) downcast when you need to invoke methods of the class being extended.
This could be "resolved" if a language (say Java) would make interface implementation and class extension mutually exclusive.
That said, the language (say Java) gives you the option to do this, but it is certainly NOT a requirement. You can limit yourself and mutually exclude implementation and extension in your code. Then again, you run into the same sort of problem when dealing with inheritance.
Java/C#. Java's the language made by a procedural programmer who thought "wouldn't it be great if everything was an object?". A programmer who never worked with OO and then tried sticking something like it on C. A programmer who didn't even know what a byte is. There are plenty of good reasons why it lost to JavaScript despite coming on the browser scene a year in advance. Ah well. At least it's great in providing "garbage in, garbage out" examples.
And legacy superclasses? Great stuff. Give the coders a broom and let them clean up. Never twist around a class hierarchy and turn it upside down. If you need serious changes, go back to the drawing board first.
The only "good" reason for downcasting that I know of is when you're trying to make a function for a superclass and it doesn't have a universal API that's usable, and you don't have control of that API.
For instance, makeSound(animal: Animal). In this case, Animal doesn't actually provide a standard API for it to call, so it checks the types. If it's a Cat, call meow() on it. If it's a Dog, call bark() on it. Something like that.
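For concreteness, a sketch of that workaround (assuming Animal/Cat/Dog come from code you can't change and share no sound method):

```csharp
using System;

// Pretend these come from a library we don't control: no shared sound method.
public abstract class Animal { }
public sealed class Cat : Animal { public string Meow() => "meow"; }
public sealed class Dog : Animal { public string Bark() => "woof"; }

public static class Sounds
{
    public static string MakeSound(Animal animal) => animal switch
    {
        Cat cat => cat.Meow(),   // downcast via type pattern
        Dog dog => dog.Bark(),
        _ => throw new ArgumentException($"Don't know what sound a {animal.GetType()} makes")
    };
}
```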
Thanks for the video. Personally, I'm using downcasting only for deserialized objects (C#'s System.Object) and while working with some native Windows code. For now I don't see any other way to avoid it, so this is the only case where downcasting is necessary (personal opinion). But yeah, we should avoid it whenever possible.
Awesome vid
I couldn't finish watching this nonsense. Such code "smells" to him, sheesh. And what about the fact that we add new methods and properties, and sometimes the only way to access them is to downcast? Because they are not in the superclass.
Downcasting is needed when deserializing in .NET (e.g. XmlSerializer.Deserialize).
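A minimal example of the cast in question; "Order" is a hypothetical type used only for illustration:

```csharp
using System.IO;
using System.Xml.Serialization;

public class Order
{
    public int Id { get; set; }
}

public static class OrderXml
{
    public static Order Load(Stream stream)
    {
        var serializer = new XmlSerializer(typeof(Order));
        // Deserialize is typed to return object, hence the downcast.
        return (Order)serializer.Deserialize(stream)!;
    }
}
```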
The Best !
Use an instanceof check before downcasting. That's all. No need to label downcasting as a code smell.
He mentions in the video that type checking is also a code smell and there's a video on that. I haven't watched it yet, but I'm inclined to agree.
I am guilty of type checking and downcasting. BUT, I understand why it is a code smell. The main reason is that it forces you to violate the Open/Closed Principle. Why? Because as you introduce new subtypes into your system, you are forced to keep adding new type checks to keep your code from breaking.
@@Zxv975 That's way too hard and fast a rule.
Interfaces aren't just about Polymorphism; they can also be about what functionality is exposed to an outside user. If I have a factory class that manufactures interface X and collaborator Y, the user who interacts with them need not have any knowledge of some of their internal state or methods, and I may even want to hide that in a class private to the factory and the implementations of the interfaces X and Y. But when Y collaborates with X, it may need to downcast it to access some implementation-related data that isn't exposed to consumers of the interface X.
What the rule is getting at is that we can have a lot of child classes of X, but this is privileging the Polymorphism side of things and the way that downcasting complicates it or renders it useless (which I agree with). In reality, there are other reasons for using inheritance that have nothing to do with Polymorphism.
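A rough sketch of that factory scenario (all names are hypothetical, chosen only to mirror the X/Y description above):

```csharp
public interface IX { void DoPublicThing(); }
public interface IY { void CollaborateWith(IX x); }

public static class Factory
{
    public static (IX X, IY Y) Create() => (new XImpl(), new YImpl());

    // Implementation detail shared only by the factory's own classes.
    private sealed class XImpl : IX
    {
        internal int HiddenState;            // deliberately not on IX
        public void DoPublicThing() { }
    }

    private sealed class YImpl : IY
    {
        public void CollaborateWith(IX x)
        {
            // Y "knows" the factory only ever hands out XImpl, so it downcasts
            // to reach state that consumers of IX never see. This is the
            // trade-off being debated in the thread.
            var impl = (XImpl)x;
            impl.HiddenState++;
        }
    }
}
```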