Champion "Nullable reference types" #36

Open
MadsTorgersen opened this Issue Feb 9, 2017 · 221 comments

Contributor

MadsTorgersen commented Feb 9, 2017

LDM:

gulshan commented Feb 12, 2017

Will the default constructor also generate warnings for all non-initialized reference-type members?

Contributor

eyalsk commented Feb 13, 2017

In the following case:

var y = b ? "Hello" : null; // string? or error
var z = b ? 7 : null; // Error today, could be int?

I think that in both cases it should be <type>?; more and more people are using var in their codebases, and switching to an explicit type just to evaluate an expression seems odd. However, in the case where you really want it to be an error, using an explicit type actually makes more sense.

Contributor

yaakov-h commented Feb 24, 2017

Nullability adornments should be represented in metadata as attributes. This means that downlevel compilers will ignore them.

Can we have a serious discussion about using modopts (or even modreqs) to allow for nullability overloads, i.e. two methods that differ only by nullability?

This would be useful for functions where null input = null output, and non-null input = non-null output, or even vice-versa.
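
For illustration, a minimal sketch of the kind of overload pair being asked about (the Normalize methods here are hypothetical): under the attribute-based encoding both declarations erase to the same CLR signature, so they cannot coexist, whereas a modopt/modreq encoding would keep them distinct.

#nullable enable
public static class PathText
{
    // Non-null in, non-null out.
    public static string Normalize(string value) => value.Trim();

    // Null in, null out. With the attribute encoding this has the same CLR
    // signature as the overload above, so the pair does not compile.
    public static string? Normalize(string? value) => value?.Trim();
}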

Contributor

HaloFour commented Feb 24, 2017

@yaakov-h

I believe that conversation has happened a couple of times already. modreq is not CLS-compliant. modopt does allow for overloads, but it requires specific understanding on the part of all consuming compilers, since the modifier must at the very least be copied into the call signature. Both break compatibility with existing method signatures. For something that will hopefully proliferate across the entire BCL very quickly, using modopt would create a big hurdle.

gulshan commented Mar 9, 2017

I think this feature is about objects and not types, as it does not change anything about the actual type system. When nullability was introduced for value types, it was about the types. But that's not the case here. Quoting the proposal (emphasis mine):

This feature is intended to:

  • Allow developers to express whether a variable, parameter or result of a reference type is intended to be null or not.
  • Provide optional warnings when such variables, parameters and results are not used according to their intent.

I think the "variables, parameters and results" (are the members of a class being excluded here?) can be easily replaced by "objects". The proposal is introducing non-nullable objects(even including value types) and putting them to be default with separate notation for nullable objects. So, the proposal can be renamed to-
"Default non-nullable objects" or something like that IMHO.

Member

gafter commented Mar 9, 2017

@gulshan Objects exist at runtime. Types exist at compile-time. This is about changing what happens at compile-time, not at runtime.

gulshan commented Mar 10, 2017

@gafter I have two questions:

  • Would typeof return same type for nullable and non-nullable references?
  • And, as default returns null for classes, will FooClass foo = default(FooClass) generate warnings?

For nullable value types, both answers are "no". But I guess for nullable references, the answer is "yes", because nothing about the types themselves has changed; the only thing that changes is whether a reference may hold null or not. If I am wrong here, please correct me.

BTW, now I propose the title "Default to non-nullable references" or simply "Non-nullable references".

Member

gafter commented Mar 10, 2017

@gulshan Yes, and yes. The first because System.Type objects only exist at runtime. The second because the compiler knows that default(SomeReferenceType) is the constant null.
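
A small sketch of both answers (FooClass here is the hypothetical class from the question; the exact warning text comes from later prototypes):

#nullable enable
class FooClass { }

class Demo
{
    static void Main()
    {
        // The annotation has no runtime representation: both variables
        // report the same System.Type instance.
        FooClass a = new FooClass();
        FooClass? b = a;
        System.Console.WriteLine(a.GetType() == b.GetType()); // True

        // default(FooClass) is the constant null, so assigning it to a
        // non-nullable reference produces a nullability warning.
        FooClass foo = default(FooClass);
        System.Console.WriteLine(foo is null); // True
    }
}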

Thaina commented Mar 14, 2017

Forgot to ask this: would this feature allow generic nullable types?

I mean, should code like this compile with this feature enabled?

public T? GET<T>(string url)
{
    try { return HttpGet(url).Convert<T>(); }
    catch { return null; }
}

int n = GET<int>(url0) ?? -1;
var m = GET<MyCustomClass>(url1);

Member

gafter commented Mar 14, 2017

@Thaina Not as currently envisioned, no. There is no way to translate that into something expressible in the CLR (except by duplicating the method for each "nullable" unconstrained type parameter).

orthoxerox commented Mar 14, 2017

@gafter how would duplicating work if C# can't support methods that have mutually exclusive constraint flags (gpReferenceTypeConstraint and gpNotNullableValueTypeConstraint) but have otherwise identical signatures from the viewpoint of overload resolution?

Contributor

yaakov-h commented Mar 15, 2017

The proposal doesn't take into consideration any of the discussion from the roslyn repo about telling the compiler "I know at this point in time that the value is not null. Do not warn me.", for example using a ! postfix operator.

Can we take that into consideration, or will #pragma warning disable be the only way around false positives?
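
For reference, a minimal sketch of what such an escape hatch looks like, using the postfix ! form under discussion (which later shipped); the FirstLine helper here is made up:

#nullable enable
static class StringHelpers
{
    public static string FirstLine(string? text)
    {
        if (string.IsNullOrEmpty(text))
            return "";

        // The guard above means text cannot be null here, but if the flow
        // analysis ever misses a pattern like this, the postfix '!' asserts
        // non-null locally instead of a file-wide #pragma warning disable.
        return text!.Split('\n')[0];
    }
}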

Richiban commented Mar 27, 2017

@gafter In @Thaina's example, would it be appropriate to lift the return into a Nullable type at that point? I'm not sure of the feasibility of a compiler feature that sometimes works with Nullable<T> and sometimes works with plain T and hides the difference between the two.

I was thinking that the original example:

	public T? GET<T>(string url)
	{
	    try { return HttpGet(url).Convert<T>(); }
	    catch { return null; }
	}

would look like this if you decompiled it:

	public Nullable<T> GET<T>(string url)
	{
	    try { return HttpGet(url).Convert<T>(); }
	    catch { return null; }
	}

But in C# vNext you wouldn't know. The type just appears to be T? still.

MikeyBurkman commented Mar 27, 2017

I'm still a bit curious how this could be implemented with var without breaking a lot of existing code:

var s = "abc";
...
s = null;

This is perfectly valid currently. However, what type should s be inferred to? If it's inferred to just String, then it'll break existing valid code. If it's inferred to String?, then it'll essentially force devs to make a choice between type inference and strict nullability checks.

Pzixel commented Mar 27, 2017

@MikeyBurkman it may be inferred in different ways based on a project checkbox (like it's done for unsafe)

mlidbom commented Mar 27, 2017

To my mind the most important issue to fix, by far, is the current need to manually validate at runtime that every single reference type parameter is not null. This latest proposal seems to de-emphasize that to the point where it is just mentioned as a possible "tweak" at the end. The way I understand it, you would get warnings but zero guarantees.

I believe I am far from alone in feeling that without fixing that, this feature would be of limited value. Here is a vote for ensuring that runtime not-null guarantees are prioritized. It would probably remove something like 90% of the parameter validation code that C# developers currently write.

I would suggest going a step further than the "!" syntax and supplying a compiler switch that would automatically generate such checks for ALL reference type parameters not explicitly allowed to be null by marking them with "?". This would be a huge time saver and if implemented in the compiler it should be possible to implement optimizations that would make the cost negligible.

I would also suggest extending the "!" syntax to check for default(TValue) in struct parameters. So that you could, as one example, validate that a Guid is not Guid.Empty with just a single "!" character.

mattwar commented Mar 27, 2017

@MikeyBurkman var can infer the type to be string? with a known non-null value. This means the variable will pass all the checks and can be used as not-null until it is changed to the null value or to a value not known to be non-null.
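
A short sketch of that behavior, assuming the inference rule described above (this matches how the feature eventually shipped):

#nullable enable
var s = "abc";          // inferred as string?, but the flow state here is "not null"
int len = s.Length;     // no warning: s is known to be non-null at this point

s = null;               // allowed, because the inferred type is string?
len = s.Length;         // warning: s may be null here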

mattwar commented Mar 27, 2017

@Richiban you could only write that if T was either constrained to class or struct.
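
A sketch of what the constrained versions would look like (constraints alone do not distinguish overloads, so two method names are needed; HttpGet and Convert are the hypothetical helpers from the earlier snippet):

#nullable enable
static class Client
{
    // When T is constrained to class, T? is a nullable reference type.
    public static T? GetReference<T>(string url) where T : class
    {
        try { return HttpGet(url).Convert<T>(); }
        catch { return null; }
    }

    // When T is constrained to struct, T? is Nullable<T>.
    public static T? GetValue<T>(string url) where T : struct
    {
        try { return HttpGet(url).Convert<T>(); }
        catch { return null; }
    }
}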

MikeyBurkman commented Mar 27, 2017

@mattwar The issue with inferring it to the nullable type isn't that it's necessarily wrong, it's that it's extremely inconvenient, to the point that it becomes a nuisance. I don't recall seeing any suggestion about flow typing (where a variable's type can be narrowed by, for instance, a non-null check), so in order to pass that variable to a function not expecting a null, I'd either have to give it some default value (foo(s ?? "")), which would be quite redundant in most cases, or I'd have to declare the non-null type explicitly, making var useless.

Contributor

jnm2 commented Mar 31, 2018

The billion-dollar mistake was not null in and of itself; it was the fact that type systems implicitly allow ReferenceType | null every time you use ReferenceType. Considering null to fulfill the contract ICanPerformX is just as silly as pretending that ICanPerformY fulfills the contract ICanPerformX, yet the type system does this implicitly. (The same conceptually applies to pointers as it does to references.)

If null was its own type as it is in TypeScript, and nothing was implicitly | null but rather you had to write ICanPerformX | null just like you'd write ICanPerformX | ICanPerformY, then we'd have all of the upsides of null and none of the billion-dollar downsides. This proposal gets us closer to that world.

hades-incarnate commented Apr 2, 2018

You are continually failing to realize the issue here. The issue is not whether or not the language needs null reference type checking (it's a really nice thing to have), or whether or not I am personally too lazy to do it (I work for a salary), or whether opt-in/out is an assembly flag or a command line switch (it really amounts to the same thing, as something that alters compiler state). It's about the fact that for the majority of large codebase maintainers, corporate especially, the opportunity and rationale for incurring the cost to review the entire code base will happen exactly NEVER. You are leaving us behind. Make no mistake about that. The only way to bring us on board is to have non-breaking incremental changes to the language specification. Hence the ! suggestion. But then we come to the point that you don't want to put ! on non-null positions for the sake of maintaining backward semantic compatibility (how inappropriate would it be now if I called you lazy?). So the battle seems to come down to who would be forced to type an extra character, and I feel that large and complex corporate users are not properly represented in this design selection process :) .

And honestly, I don't care that Tony Hoare lived long enough to regret his design decisions; that does not make him holy or right. I believe null references would exist with or without him, because they just make sense in a lot of situations for no-value/no-op signalling, so someone else would have figured it out. It's an idea whose inception was inevitable and, as he himself put it, "it was so easy to implement". Putting aside the obvious need for nullable strings coming from NOT NULL database sources, which are innumerable, passing null as an interface, which you seem to believe is the most extreme abuse, also makes sense. For example, simplified:

void methodCall(IOptionalPerform x, ...)
{
   x?.Doit(); // or if (x != null) x.Doit(); as we have it in a lot of places
...
}

And we are supposed to do what here, create a NullOptionalPerform class with an empty Doit() and instantiate it every time we passed null in? Why? Because you forgot to do a null check (here's that inappropriate lazy labeling again)? In what universe do you believe this would happen? My boss would laugh in my face if I brought this to him asking to divert resources from ongoing projects to this frivolous pampering of coding purity. And I wouldn't blame him. What are we going to do with singly/doubly linked lists? Have singleton .HEAD and .END instances and compare references? Come on. Extremism is never a good way to move anywhere.
C# is not a novelty language; it has history, it has mainstream use, it's not academic, it's a mason's tool. You have responsibility.

orthoxerox commented Apr 2, 2018

@hades-incarnate

And we are supposed to do what here, create NullOptionalPerform class with empty Doit() and instantiate it every time we passed null in?

No, just add a single ?: IOptionalPerform? x
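
In other words, a minimal sketch reusing the hypothetical IOptionalPerform interface from the comment above:

#nullable enable
interface IOptionalPerform { void Doit(); }

class Consumer
{
    void MethodCall(IOptionalPerform? x)
    {
        x?.Doit(); // no warning: x is declared as possibly null and the call is guarded
    }
}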

Contributor

HaloFour commented Apr 2, 2018

@hades-incarnate

The ! was the original suggestion for this feature. It was decided that while it does help with backwards compatibility, it has two major disadvantages. First, it puts the onus on the vast majority of developers to annotate the vast majority of their codebase. An adornment that must be used more than 95% of the time is effectively noise. Second, it means that the adoption rate starts at 0% and requires action from every developer everywhere before anyone even begins realizing any benefit from the feature.

If you think that the team is making this decision on a whim, without taking into consideration the impact this will have on massive codebases owned by their numerous partners, I'd suggest that you are quite mistaken.

hades-incarnate commented Apr 2, 2018

yes, exactly, adoption rate for any new feature SHOULD start at 0%. That's how it's been for, like, forever. Why is that now a bad thing? This decision, and the way it's done, should have been made 18 years ago when the CLS was being made. Not today. What you say about placing the onus on developers to update old codebase is wishful thinking, that was not going to happen either way. They were not going to put ! in old code no more than they are going to put ? on old code. Nobody is going to want to pay for that. This feature will be rolled out incrementally, regardless of the choice. And where opt-out gets applied for whatever reason, from "nope, I am not dealing with these 1000+ warnings" to "the risk management group requires zero warnings", the ? won't be used at all. So all the effort, as well as the opportunity to bring it in, is wasted; you are actually forcing people to abandon a good idea out of inconvenience. With ! you have no such consideration: just turn it on, existing code will have zero warnings, and where you apply it, it means you WANT the warnings. You are opting in and adopting the feature by choosing to use it at your own tempo. As it should be with new features.

I do not think that the team is acting on a whim, but I think they are being overly influenced by the theoretical faction and the general-public democratization that has been plaguing Linux FOSS for decades. For goodness' sake, people are throwing fingers up and down over a technical issue :) The mere fact that I stand alone against your opinion base (as evident by comments and reactions) tells me that massive codebase owners have very little influence, or for that matter presence, here. I doubt a lot of them even know about this place in particular; the only reason I found it is because I was more personally involved in some GitHub projects, and am a nerd :). As for partnership influence, khhhm, I am just going to keep my mouth shut.

No, just add a single ?: IOptionalPerform? x

On millions of lines of code over hundreds of projects?

Contributor

HaloFour commented Apr 2, 2018

@hades-incarnate

adoption rate for any new feature SHOULD start at 0%. That's how it's been for, like, forever. Why is that now a bad thing?

Because the feature is completely useless without adoption.

This decision and the way its done should have been made 18 years ago when CLS was being made. Not today.

Hindsight is 20/20. At the time the bigger problem wasn't that uninitialized pointers were null, it was that they were garbage that was indistinguishable from a legit value.

What you say about placing the onus on developers to update old codebase is wishful thinking, that was not going to happen either way. They were not going to put ! in old code no more than they are going to put ? on old code.

They're more likely to do it if there is less work involved. For the majority of developers this path is much less painful.

And, as mentioned, if you're adorning the vast majority of the cases, the adornment is just noise. Better to adorn the exception. In two language versions it'll make much more sense that way.

On million of lines of code over hundreds of projects?

Tooling will handle much of that. Flow analysis will as well.

I'll also mention that the team is currently evaluating prototypes of this feature trying to find out how to minimize false positives and to make the transition as painless as possible. They'd probably be very interested in more data.

Or simply don't turn it on. If your projects routinely use null as a valid value then this feature holds no benefit for you. Ignore that this proposal exists. Nobody is making you do anything.

dgreene1 commented Apr 2, 2018

I think the beauty of the decision the team landed on (after much deliberation) is that it comes down to a simple question: what would you like the code to look like in 3 years? Either:
Option 1) Lots of ! everywhere, like User!, or
Option 2) A few ? in the few places where we want to allow the memory to have not been allocated.

@hades-incarnate, I understand your concern; however, if you have the time to check out what Microsoft did with TypeScript's strict-null compiler option, I think you'd really love it. When you live in a world where the compiler is helping to catch null-reference errors, you might reconsider being willing "to abandon a good idea out of inconvenience", as you said. The convenience (i.e. bug finding) and the clarity of the code (by communicating contracts more clearly) that this new functionality provides will help make future code so much more maintainable.

Side note: I can speak on behalf of multiple teams at my company that are utilizing the strict-null flag in TS, and it has really helped prevent a multitude of bugs. It's also helped speed up code reviews. Lastly, I should share that many team members who were previously resistant have found that the migration was smoother than they expected. So if you think from a "post-migration" standpoint, the clear winner from above is option 2.

tl;dr: it's so nice to know whether the variable coming through is "maybe null" versus "definitely not null". And since that's so nice, it's better to prefer that as the default coding practice, which therefore eliminates the thousands of !'s that we would have to introduce if strict null were not the default.

TylerBrinkley commented Apr 2, 2018

Or simply don't turn it on. If your projects routinely use null as a valid value then this feature holds no benefit for you. Ignore that this proposal exists. Nobody is making you do anything.

I think his issue is that he'd like to use it but can only opt in at the assembly level, and thus won't be able to use it due to the large amount of work needed to fully update an entire existing assembly. I suppose you might be able to enable the feature but suppress the warnings at the assembly level, and then use a pragma to incrementally enable the warnings on a file-by-file basis.
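
For what it's worth, a sketch of that file-by-file approach using the directive form that eventually shipped in C# 8 (the exact mechanism was still being designed at the time of this comment; Customer is a made-up example type):

// Only this file opts in; the rest of the assembly keeps the old semantics.
#nullable enable

public class Customer
{
    public string Name { get; set; } = "";   // non-nullable: must be initialized
    public string? Nickname { get; set; }    // explicitly allowed to be null
}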

CyrusNajmabadi commented Apr 2, 2018

I think his issue is that he'd like to use it but can only opt-in on an assembly level and thus won't be able to use it due to the large amount of work to fully update an entire existing assembly.

I believe the onus is on the team to help ensure good tooling that makes updating an entire assembly not hugely costly. Note: I would personally like initial tooling to come along with the feature, and then have the tooling improve over time as we encounter good cases from customers who could really use help porting their code over en masse.

I've personally offered my own time to help out here, as I think this feature is great, but it must be paired with great tools to make it not unreasonably costly for people to migrate to it.

Richiban commented Apr 2, 2018

@hades-incarnate

large and complex corporate users are not properly represented in this design selection process

First of all, I don't think that's true at all; there are many others, just like yourself, that are active in the GitHub discussions despite being a member of a large corporation working with literally millions of lines of code.

The issue is not [...] whether optin/out is an assembly flag or command line switch. [...] You are leaving us behind.

Second of all I think you seem to have missed the point that the feature is optional. If you don't want to use null tracking then don't turn it on. Your old code written in C# <8

void methodCall(IOptionalPerform x, ...)
{
   x?.Doit();
...
}

will continue to work in C# 8 without warnings or errors. You will not be left behind; you will not have to rewrite any old code to upgrade your compiler to v8. You just won't get the benefit of null tracking.

I really think that the C# team can be trusted when it comes to backwards compatibility. You'd be amazed at how they bend over backwards to make sure that code written in C# 1.0 still compiles with v7 of the compiler (they care far more about it than anyone else, including me!).

Contributor

yaakov-h commented Apr 3, 2018

What you say about placing the onus on developers to update old codebase is wishful thinking, that was not going to happen either way. They were not going to put ! in old code no more than they are going to put ? on old code.

I call BS on this. I work on an extremely large .NET codebase - I believe one of the largest in the world.

A few years ago we started putting Code Contracts on all new projects and even began trying to retrofit existing ones with Contracts - primarily around nullability, but Contracts also provided other guarantees (e.g. int fields for port numbers are 0 < num < 65536, or that filesystem paths are not empty).

Code Contracts ended up being a disaster, but the static analysis has been highly valuable for new projects. I can count on one hand the number of NullReferenceExceptions we've seen in production, and half of them were due to bugs in the Code Contracts rewriter tool (😢).

We're currently looking at automatically migrating our Contracts into C# 8 nullability annotations (or lack thereof), and perhaps even resuming the retrofit.

Just because your project/team/company doesn't think this is useful does not indicate that all projects/teams/companies think the same way.

You don't have to use it, and if you do, I believe you will be able to selectively enable it on an assembly-by-assembly basis. (If not, I think my unit testing assemblies are in trouble.) The default assumption that your code is null-safe, with any deviation from this being a warning, may be a bit of a bumpy start, but it's the right thing to do for the future of C#, and it's less painful than crashing at runtime.
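
As a rough illustration of that kind of migration (a sketch only, not the actual tooling; PathHelper and its methods are made up):

#nullable enable
using System.Diagnostics.Contracts;
using System.IO;

public static class PathHelper
{
    // Before: a Code Contracts precondition enforcing non-null at runtime.
    public static string CombineWithContracts(string? basePath, string? relative)
    {
        Contract.Requires(basePath != null);
        Contract.Requires(relative != null);
        return Path.Combine(basePath!, relative!);
    }

    // After: the unannotated (non-nullable) parameter types carry the same
    // intent, checked at compile time instead of by the contracts rewriter.
    public static string Combine(string basePath, string relative)
        => Path.Combine(basePath, relative);
}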

Contributor

svick commented May 20, 2018

From recent LDM notes:

We're a little concerned about int-returning comparers, since chasing in a compiler analysis whether their result is 0 is quite subtle. We're open to discussing it again.

Do I understand it correctly that the question is about code like the following?

void M(Foo? foo)
{
    if (Comparer<Foo?>.Default.Compare(foo, null) != 0)
    {
        // is foo considered nullable here?
    }
}

If so, then it doesn't really seem to be worth it to do anything special here to me. Is there a realistic example where this would be actually useful?

KrzysztofBranicki commented Sep 19, 2018

Will it be possible to check at runtime, e.g. using reflection, whether a given property is a nullable or non-nullable reference type?
This would be very valuable, for example, for swagger/OpenAPI generation. Currently swagger generators mark all value-type fields as required in the swagger definition and all reference types as optional, and to make a reference type appear as required to such generators you need to use RequiredAttribute. It would be nice to rely only on the "nullable reference types" feature; otherwise you need to express your intent in two ways, which is not ideal.
This may also be interesting for the ASP.NET team, since they have swagger/OpenAPI generation on their roadmap.

Contributor

yaakov-h commented Sep 19, 2018

I believe you should be able to check this fairly trivially by looking for attributes, since that's how the prototype currently encodes this information.

Though in theory the compiler could erase that information for members with private or internal accessibility.

KrzysztofBranicki commented Sep 19, 2018

@yaakov-h thanks for the explanation, that works for me. In this particular case I would not worry about the compiler not putting those attributes on members with private or internal accessibility, because the properties you generate a swagger definition for have at least a public getter.

remag9330 commented Sep 21, 2018

I know this is the C# repository so this will probably be a long shot, but is there any word on whether or not this feature will also be coming to VB for those of us still using it? The previous issue about this in the Roslyn repo had it tagged as both a C# and a VB feature, but now it's in the C# repo and the preview does not support VB (last time I checked).

PathogenDavid commented Sep 21, 2018

As far as I can find, there is no corresponding issue in the vblang repo and it is not listed as a feature for VB 16.0. So I would assume not, but @KathleenDollard would know for sure.

remag9330 commented Sep 21, 2018

Of course, after I ask the question I find an answer:

https://github.com/dotnet/vblang/blob/master/meetings/2017/vbldm-notes-2017.08.30.md
https://github.com/dotnet/vblang/blob/master/meetings/2018/vbldm-notes-2018.02.07.md

Looks like it's been discussed but they're waiting to see how well it's received in C# first before considering for VB.

petroemil commented Sep 25, 2018

I have a couple of questions that probably a lot of people asked before me, but this is a very long discussion and I gave up reading it all :)

Will string and string? look different when using reflection (for example: could a serializer throw an exception when it can't fill in a non-nullable reference typed property)?

Will I be able to see nullable and non-nullable properties/parameters/fields/etc. when consuming a 3rd party library that is using this feature? Also, will I see lots of nullable reference typed properties when consuming an older library?

Contributor

HaloFour commented Sep 25, 2018

@petroemil

Will string and string? look different when using reflection

Any field, property, parameter, or return value that is nullable will be adorned with System.Runtime.CompilerServices.NullableAttribute. Any non-adorned target will be considered non-nullable. This is toggled on or off at the module or type level (I believe) with a second attribute, System.Runtime.CompilerServices.NonNullTypesAttribute.

Any serialization/deserialization library will have to be updated to respect this metadata.
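
A rough sketch of what such a reflection check could look like against the encoding described above, matching by attribute name only since the exact attribute shape was still in flux at the time:

using System.Linq;
using System.Reflection;

static class NullabilityProbe
{
    // Treat a property as nullable if it carries the prototype's
    // NullableAttribute; anything unadorned is assumed non-nullable.
    public static bool LooksNullable(PropertyInfo property) =>
        property.CustomAttributes.Any(a =>
            a.AttributeType.FullName == "System.Runtime.CompilerServices.NullableAttribute");
}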

jacekbe commented Oct 7, 2018

If I understand this feature correctly, non-nullable reference type fields and properties are supposed to be initialized in the constructor; otherwise we'll get a warning. What should be done in the case of classes that follow a two-step initialization scheme, i.e. object creation followed by calling some Init() method on the object instance?
I know it might sound like a design smell, but the fact is that such an approach is sometimes used in existing codebases (e.g. an object upon creation is passed to some "framework" that manages its lifetime and, among other things, "initializes" it).

What if some property is initialized in the Init() method and not in the constructor? Am I forced to mark such a property as nullable? That would mean I'd have to use the dammit operator on every reference to such a property, right? Is there some other way? For example, some attribute that would treat the annotated method as an extension of the constructor with regard to nullability (that is, the property/field should be initialized either in the constructor or in the Init() method).

kburgoyne commented Oct 7, 2018

I have similar such classes where I used to use that pattern: construct then Init(). I've subsequently changed to using a static Create() which returns the class instance, encapsulating the constructor and Init() as private. I find this pattern to be more fool-proof since the caller just doesn't have the option of constructing and then forgetting to Init(). (I usually view myself as the future fool I'm trying to fool-proof against.) I think it's also better because the class encapsulates and hides all the responsibility for getting itself constructed and initialized.

Some of my classes may also have a CreateAsync(CancellationToken) instead or as an option in cases where the class construction involves, for example, reading a file or other operation which the caller may prefer to have be async.

Previously I've used two different approaches with the static Create() method. One is to construct a (partially) uninitialized instance, then call Init() on the instance. The other is to have Create() compute all the property values first (for those not passed to Create), THEN call the constructor passing in all the values.

This second pattern is probably what you want to consider. The caller can pass what it already knows to the static Create() method, then Create() reads files or whatever to determine the remaining property values, and finally Create() calls the (probably private) constructor passing in everything, thus leaving no undefined properties.

Some .NET framework classes follow this pattern, but the "create" method will be some other name related to some action to be performed. I've done that as well on some of my classes where something more descriptive than "create" is the right name for the method that creates the class instance. For example, "SpecialTypeOfFile.LoadAsync(…)" where "SpecialTypeOfFile" is a class to encapsulate all the data read from a particular type of file. The "LoadAsync()" performs the same function as "CreateAsync()" in that it also instantiates the class, but the name used better describes the main part of the action to be performed.
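
A minimal sketch of that second pattern (names borrowed from the example above; the file-reading details are made up):

using System.IO;
using System.Threading;
using System.Threading.Tasks;

public sealed class SpecialTypeOfFile
{
    public string Contents { get; }

    // Private constructor: every member is assigned here, so nothing is left
    // waiting for a later Init() call.
    private SpecialTypeOfFile(string contents) => Contents = contents;

    // The static factory gathers everything first, then hands it to the constructor.
    public static async Task<SpecialTypeOfFile> LoadAsync(string path, CancellationToken cancellationToken)
    {
        var contents = await File.ReadAllTextAsync(path, cancellationToken);
        return new SpecialTypeOfFile(contents);
    }
}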
