Simulating INumeric with dynamic in C# 4.0
Luca -
☕ 2 min. read
When I wrote my Excel financial library I agonized over the decision of which numeric type to use to represent money. Logic would push me toward decimal, but common usage among financial library writers would push me toward double. I ended up picking double, but I regret having to make that choice in the first place.
Conceptually, I’d like my numeric functions to work for anything that supports the basic arithmetic operators (e.g. +, -, *). Unfortunately, that is not possible in .NET at this point in time. In essence, you have to write your code twice, as below.
static double SumDouble(double a, double b) { return a + b; }
static decimal SumDecimal(decimal a, decimal b) { return a + b; }
Granted, this is not a good state of affairs. We often discussed how to make it work, but we couldn’t find a solution that was both fast to run and cheap for us to implement. More often than not, we speculated about having the numeric types implement a specific INumeric interface and adding a generic constraint to the C#/VB languages to make it work. Hence the title of this post.
Now that we have implemented dynamic in C# 4.0, it occurred to me that you can fake your way into writing your code just once. To be sure, this solution doesn’t have the same performance characteristics as ‘writing your code twice’, but at least it doesn’t duplicate your code.
This is how it looks:
static dynamic Sum1(dynamic a, dynamic b) { return a + b; }
The call to the ‘+’ operator is resolved at runtime by the C# binder, hence a performance penalty is incurred. The penalty is less than you might think, given that the DLR caches things under the covers so that the binding work is not repeated the second time around. The whole thing is explained in more detail here. But still, it is not as fast as a normal ‘+’ operator over a primitive type. I’ll let you enjoy micro performance testing this one 🙂
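If you want to try that yourself, a rough sketch of such a micro-benchmark could look like the following (purely illustrative: run a release build without a debugger attached, and expect the numbers to vary by machine and runtime):

using System;
using System.Diagnostics;

class DynamicSumBenchmark
{
    static dynamic Sum1(dynamic a, dynamic b) { return a + b; }

    static void Main()
    {
        const int N = 10000000;

        double acc = 0;
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) acc = acc + 1.0;        // statically bound '+'
        Console.WriteLine("static : {0} ms (acc = {1})", sw.ElapsedMilliseconds, acc);

        acc = 0;
        sw = Stopwatch.StartNew();
        for (int i = 0; i < N; i++) acc = Sum1(acc, 1.0);   // dynamically bound '+'
        Console.WriteLine("dynamic: {0} ms (acc = {1})", sw.ElapsedMilliseconds, acc);
    }
}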
A slight refinement is to make the code generic so that a caller doesn’t see a signature with dynamic types as arguments.
static dynamic Sum2<T1, T2>(T1 a, T2 b)
{
    dynamic ad = a;
    dynamic bd = b;
    return ad + bd;
}
I could make the return type generic as well, but that would force the caller to be explicit about the types, making the calling code much less readable. The other good thing about this signature is that you get a different call site with each combination of type arguments and, since they are separate, the binding caches should stay small. With the former signature there is only one call site and the cache could pile up to the point where the DLR decides to discard it.
Here is how the calling code looks right now:
Console.WriteLine(Sum2(2m, 4m));
Console.WriteLine(Sum2(2.0, 4.0));
Console.WriteLine(Sum2(new DateTime(2000, 12, 1), new TimeSpan(24, 0, 0)));
Yet another way to write this code is as follows:
public static T Sum3<T>(T a, T b)
{
    dynamic ad = a;
    dynamic bd = b;
    return ad + bd;
}
This gets around the problem of showing a dynamic return value and gives you some more compile-time type checking. But it prevents summing values of two different types (such as a DateTime and a TimeSpan), because both arguments must be of the same type T. The compiler doesn’t let you get there: the last line below won’t compile.
Console.WriteLine(Sum3(2m, 4m));
Console.WriteLine(Sum3(2.0, 4.0));
//Console.WriteLine(Sum3(new DateTime(2000, 12, 1), new TimeSpan(24, 0, 0)));
Also notice that in VB you could have done this a long time ago 🙂
Function Sum(Of T1, T2)(ByVal a As T1, ByVal b As T2)
    Dim aa As Object = a
    Dim bb As Object = b
    Return aa + bb
End Function
In summary, by using dynamic you can write your numeric code just once, but you pay a performance price. You are the only one who can decide if the price is worth paying in your particular application. As is often the case, the profiler is your friend.
Comments
David Nelson
2009-02-06T00:06:03Z
You don't just pay a performance price, you pay a usability price. With the duplicated version, the compiler verifies that the types you are using are correct. With the dynamic version, the compiler can't verify anything. What happens if two types are used which do not have an addition operator defined? A runtime exception occurs. What happens if this call exists in a code path which allows many different objects to be supplied to the dynamic function? The chances that any amount of testing will find all the potential runtime errors in such a circumstance are slim; whereas the compiler could have found the problem immediately.
The lack of a common interface for numeric types has been a top-rated issue on Connect for 5 years. It is clearly very important to the community, and yet there has still been no solution implemented. Frankly I am baffled by this.
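To make the first point concrete: a call like the one below compiles without any complaint against the Sum2 signature from the post, and only fails at runtime with a Microsoft.CSharp.RuntimeBinder.RuntimeBinderException (a small sketch of the failure mode):

// Compiles fine, but throws RuntimeBinderException when it runs:
// no '+' operator is defined between object and TimeSpan.
Console.WriteLine(Sum2(new object(), TimeSpan.FromHours(1)));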
Pop.Catalin
2009-02-06T04:16:24Z
I think that this is something that dynamic wasn't meant for. In VS 10 we will also have F#, and I'd rather use F# for this kind of thing to get better performance and type safety. Actually, I'd be more than happy to use F#; its programming model fits better with computational code.
This seems like playing at the edge of C#. C# was never designed for numeric programming and will probably stay that way for the foreseeable future. Which is a shame, because it's seeing more use in games, 3D applications and computation-intensive applications.
lucabol
2009-02-06T10:45:52Z
David: agreed with the comment on the need for INumeric. Agreed with the pitfall of run-time errors vs compile-time errors. I thought I said it in the post; maybe I should have said it more strongly.
Pop.Catalin: F# doesn't help in this scenario. You still need to write duplicate code.
Justin Etheredge
2009-02-06T11:28:42Z
Of course there are issues with this implementation, but what you have done is take what you have and use a creative idea to get around a problem that has existed in C# for a long time. I say you have done an excellent job!
Qwr
2009-02-07T12:57:27Z
Very simple answer:
Generics are broken in C# and CLR in general.
That ought to have been fixed before any release.
Two, it just reflects different number systems, and decimal won't cut it just the same (not to mention the perf penalty).
C# is unusable for numeric programming (without being dog slow and full of hacks), because its model (in general Java VM model) is broken not in usability, but in implementation of memory-abstracted-awayness, its idioms that are very weak, and generics (both in language and runtime). Quite obvious really.. just ask experts that do numeric libraries in C++ for your local government in the past 20 years..
lucabol
2009-02-09T14:29:39Z
Qwr: can you give me more details on the points you raise?
Apart from the lack of an INumeric interface, I'm not aware of any other big issue with C#/CLR for numeric computing.
Notice that the PInvoke and unsafe features in C# help you in these (hopefully limited) cases when you need direct memory access.
Qwr
2009-02-15T17:25:14Z
Sure, if you attempt to play with generics, you will see that substitution features are broken. Meta-programming is impossible, policy-based design is hard and bloated, etc. The word 'hack' is all over the IL :)
That's the language designers' fault, and yet we are at 4.0 and still away from programming the compiler (but C# is looking more like modern C++ by the day, hmm).
The unsafe and managed mix, in terms of memory, brings another dimension to the problem which is related to complex data flow and almost guaranteed memory leakage, i.e. it is impossible to get the deterministic behaviour that is essential for large datasets. Moreover, everything in CLR 3.0+ is so heavily object-penalised that even before a large dataset appears the overhead is already huge and subpar to anything in the industry (WPF or large apps for one).
Not to mention copy-construction hacks.. and so on. Numeric computing in Java and CLR is so far away from hardware reality of today it simply cannot be taken seriously in any HPC environment or other non-hobby work.
And if we are to believe the funny 'managed' speak, what on earth is the point of PInvoke or VM sandbox/security approaches anyway.. for numeric computing, games, video, audio, real-time apps, and much more, CLR just doesn't have an appropriate solution without some sort of unmanaged solution or hack in sight.
This is all due to Java+Delphi influence if you ask me, but we told you so 10 years back when you completed the first beta, yet no one listened. It is surprising to see that even 'using' cannot be done twice over an indirection and the language designers keep talking of meta features, compiler extensibility ( CodeDOM Nulls? ) and so on..Seems to me C# is becoming a toy language really.
It just doesn't tally.. A type system with single-inheritance, no mix in support, object approach, is like placing a void* everywhere in numeric and other disciplines.
Why doesn't someone stand up and tell that serious computing is not for the guys that will keep pushing language integration and popularity and simplicity against proper engineering?
Ree
2009-02-15T17:28:48Z
And of course, generics with value types are so limited it isn't funny anymore.. how that got past any standard (ECMA or internal) is beyond me.
Numeric computing is happening elsewhere and your competition is making better tools in native and symbolic space.. I have to say it, although I would have liked for MS to take it on seriously (and it is a serious discipline and work, not ASP.NET or some such).
dfg
2009-02-16T15:48:00Z
Hello Luca,
Very interesting article. I like the idea of an INumeric interface; if nothing else at least it would save me from writing tons of duplicated code.
Now, my two cents (a couple of thoughts I'd like to share with everyone):
In my view, the underlying problem that I think INumeric tries to solve, or at least alleviate (i.e. closing the gap between numbers and 'numbers as computers understand them'), has much deeper roots. It seems to me that at the dawn of computing science, someone decided that a few millennia of Mathematics were really not that important and that, in the computer's world they were creating, the following was going to hold true:
1/2 = 0
and yet
1./2 = 0.5
The above is just mathematically absurd and yet it 'propagated' everywhere; for 60+ years (and counting) computer scientists have lived happily with this. Now combine it with:
1./0. => "A division by zero error has been caught."
???
So, I can't divide by zero in the field of real numbers - which makes perfect sense - and yet, magically, I can divide by two in the ring of integers. Simply put, this breaks some of the most basic principles of Mathematics. In this example, I can think of at least three possible options:
1/2 = {1,0} => one cake for two guys and you assume you can't cut it: one guy gets the cake, the other one gets nothing (I don't like this idea though).
1/2 = 0.5 => you promote the arguments to the (smallest) set where the operation is defined and return the relevant element from that set (I kind of like this one).
1/2 = "Error! Division by a non-invertible element." => in reality, the exact same stuff that is reported when dividing by zero (I like this one).
What seems totally unjustified to me is to return 0 just because it happens to be the integral part of 0.5.
I'm aware of the fact that the numeric representation of a number in a computer is finite, but I don't think that's reason enough for this state of affairs. In my opinion, a higher level of numerical abstraction is needed; one that would serve as a 'bridge' between numbers - in the Mathematical sense of the word - and 'numbers as understood by computers'.
I find it intriguing that every single aspect of hardware and software has evolved at an incredible speed and yet, to a large extent, things like the way we define, store and manipulate numbers in computers seem to have frozen the very day those mechanisms were first defined.
Finally, I do think that this relates to INumeric: on top of more solid foundations one could build up to the point where INumeric is not needed anymore, simply because the concept is already there implemented at a much lower level (but this post is too long already).
Regards,
dfg
lucabol
2009-02-16T17:54:46Z
These are very good thoughts. Thanks for sharing.
dfg
2009-02-16T20:22:10Z
Hi Luca,
Thanks; always glad to contribute what little I can. Here's another thought; luckily this one is a bit more pragmatic.
What does INumeric look like in your mind at present? Would it allow me to write (in F#) something like this (for simplicity, I'm leaving out left-side operations with an INumeric):
type Complex =
{ Re: INumeric
Im: INumeric }
static member ( + ) (left: Complex, right: INumeric) =
{ Re = left.Re + right; Im = left.Im }
static member ( * ) (left: Complex, right: INumeric) =
{ Re = left.Re * right; Im = left.Im * right }
If this is the idea, then I suppose I'd have two options: either Complex implements INumeric or it doesn't. If it doesn't, then I'd need to add to the code above other methods like
static member ( * ) (left: Complex, right: Complex) =
{ Re = left.Re * right.Re - left.Im * right.Im ; Im = left.Re * right.Im + left.Im * right.Re }
On the other hand, if Complex implements INumeric, then I suppose I could write just one method:
static member ( * ) (left: INumeric, right: INumeric) =
with two branches: one for Complex and another one for all other INumeric types.
Having Complex implement INumeric would be a great advantage because then I could create a new type, say, Matrix, taking INumeric as the entries, and complex entries would be automatically considered. But in this case my question is: How would the (+) operation be resolved without some sort of hierarchical approach? For example, suppose that I create a new Matrix type implementing INumeric and I add to that type another
static member ( * ) (left: INumeric, right: INumeric) =
detailing matrix multiplication as well as (element-by-element) matrix-times-scalar multiplication.
Next, I type
let B = A * z;;
where A is a matrix and z is a complex. Which of the two methods would be invoked? Since I'm the one implementing INumeric, how would the framework make it possible for me to guarantee that the Matrix method will be invoked? The problem is that if the Complex method is invoked instead, then that forces me to duplicate code for Complex-Matrix multiplication (note that when I created the Complex type, Matrix did not exist yet).
I suppose it would be very useful to allow some kind of hierarchy (or some other approach to user control) over the call resolution, because then I would not have to duplicate code anywhere (neither for primitive types nor for my own types implementing INumeric).
Best regards,
dfg
PS.- I'm not sure if what I'm suggesting is already possible in similar contexts.
dfg
2009-02-16T22:15:42ZI forgot a couple of things:
- I'm aware that one can avoid problems by carefully using a unique non-static method:
let B = A.Times(z);;
I'm just curious to know how far flexibility and code uniqueness can be taken when using operator overloading instead.
- Is it correct that it is recommended to keep operator overloading in .NET to a minimum?
Thanks in advance,
dfg
Melitta Andersen
2009-02-24T04:42:51Z
Hi dfg,
I'm Melitta, a member of the Base Class Library team, which would own the INumeric feature. I have a couple of answers around what we've been thinking. We don't have all the details and all of this is of course subject to change.
Currently our thinking has been along the lines of an INumeric<T> that simply guaranteed that a type had particular methods. So if you were to implement a generic Complex number as in your example, it would need to specify that both Re and Im were INumeric<T>. INumeric<T> would have methods like Add(left: T, right: T) that would return type T. Then you could perform the operations on the elements themselves, using the standard formulas for complex arithmetic. You would end up with a static member ( + ) (left: Complex, right: Complex) = {Re = left.Re + right.Re; Im = left.Im + right.Im} instead of static member ( + ) (left: Complex, right: INumeric). And then you could have your Complex structure itself implement INumeric<Complex>, and it could be used in larger structures like Matrices.
We haven't been thinking of INumeric as a way to automatically interact with any possible numeric type. You still have to specify the T. This means that you still have to determine which other types your type will interact with and cast to implicitly or explicitly. In your Matrix example, if Matrix implemented INumeric, the multiplication function (it could only be an operator if interfaces allowed static methods, or if the compilers knew to compile the operator down to a particular instance method) would only multiply two matrices of the same type. So if you wanted to multiply a matrix by a complex scalar, you'd have to implement that specifically (and not call it through the interface, unless you found a way to treat complex scalars as matrices). However, INumeric may help simplify the task of multiplying a Matrix<T> by a scalar of type T.
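Very roughly, and purely as a sketch (nothing here is a shipped API and all of the names are invented), the shape we have been discussing might look something like this in C#:

// A hypothetical INumeric<T>; a real version would also need Subtract,
// Multiply, Zero, and so on.
public interface INumeric<T>
{
    T Add(T left, T right);
}

// Complex only asks its component type to implement the interface, and by
// implementing INumeric<Complex<T>> itself it can be nested inside larger
// structures such as a Matrix<Complex<T>>.
public struct Complex<T> : INumeric<Complex<T>> where T : INumeric<T>
{
    public T Re;
    public T Im;

    public Complex<T> Add(Complex<T> left, Complex<T> right)
    {
        Complex<T> result;
        result.Re = left.Re.Add(left.Re, right.Re);
        result.Im = left.Im.Add(left.Im, right.Im);
        return result;
    }
}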
As for your question about operator overloading recommendations, you may want to check out our Framework Design Guidelines on the topic: http://msdn.microsoft.com/en-us/library/2sk3x8a7.aspx.
Thanks,
Melitta
Base Class Libraries
dfg
2009-02-25T00:24:51Z
Hi Melitta,
Thanks a lot for taking the time to describe the INumeric<T> plan. With regards to static methods in interfaces, I found this:
http://dotnetjunkies.com/WebLog/malio/archive/2007/03/01/204317.aspx
Thought you might find it interesting.
Thanks and regards,
dfg
Adam Pursley
2009-02-26T00:45:59Z
I'd like the ability to declare my own interface and then declare that other classes that I don't control implement my interface, provided of course that those classes actually do have the appropriate methods/properties.
If I could do that, then in this situation I could possibly define my own INumeric<T> interface and declare that various primitive types do implement my interface.
I think the extension methods introduced in 3.0 were kind of a step in this direction.
In general that would allow the consumers to identify similarities between distinct components and pull them together without having to wait for the owners of those components to enhance the library with common interfaces in a future version.
Think about the common interfaces in the System.Data namespace in .Net 2.0. We had to wait for it to become part of the standard framework in 2.0, even though we could already see the similarities in .Net 1.1 between the components in the OdbcClient namespace classes and the components in the SqlClient namespace.
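As a tiny sketch of why extension methods only get part of the way there: they can bolt an Add method onto double, but they cannot make double satisfy an interface I define, so a generic constraint still can't see it (the names below are mine, purely for illustration):

// A consumer-defined interface that double can never be declared to implement:
public interface IMyNumeric<T>
{
    T Add(T left, T right);
}

// Extension methods can add behaviour to double...
public static class DoubleNumericExtensions
{
    public static double Add(this double left, double right)
    {
        return left + right;
    }
}

// ...but there is no way to state "double implements IMyNumeric<double>",
// so a method constrained with 'where T : IMyNumeric<T>' still cannot accept it.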
Marc Gravell
2009-02-26T02:48:36Z
Note that you can get a lot of this functionality *today* without using "dynamic". One approach is to use Expression as a micro-compiler to do the duck-typing, and cache the delegate away. This is precisely what the generic operator support in MiscUtil does.
See here for the overview:
http://www.yoda.arachsys.com/csharp/miscutil/usage/genericoperators.html
or here for the actual code:
http://www.yoda.arachsys.com/csharp/miscutil/
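The core of the trick, very roughly (a minimal sketch of the technique, not the actual MiscUtil code):

using System;
using System.Linq.Expressions;

// Build the '+' delegate once per closed generic type and cache it in a
// static field, so the expression-tree compilation cost is paid only once.
static class Operator<T>
{
    public static readonly Func<T, T, T> Add = BuildAdd();

    private static Func<T, T, T> BuildAdd()
    {
        ParameterExpression left = Expression.Parameter(typeof(T), "left");
        ParameterExpression right = Expression.Parameter(typeof(T), "right");
        // Expression.Add resolves the built-in or user-defined '+' for T.
        return Expression.Lambda<Func<T, T, T>>(
            Expression.Add(left, right), left, right).Compile();
    }
}

// Usage: Operator<double>.Add(2.0, 4.0), Operator<decimal>.Add(2m, 4m)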
Thomas Eissfeller
2009-02-26T05:39:36Z
Just a comment on performance:
I implemented the BLAS function daxpy in different languages, and here is what performance I got on my workstation (Intel Q9550 CPU, everything running single-threaded in 32-bit). n = 1000000 and performance is averaged over 1000 runs. Each addition and each multiplication is counted as one FLOP. If you want to reproduce the results, make sure you start the programs without attaching a debugger.
------------------------------------
Visual C++: 398.96 MFlops/s
void daxpy(int n, double a, double* xp, double* yp, double* rp)
{
for (int i = 0; i < n; i++)
{
rp[i] = a * xp[i] + yp[i];
}
}
------------------------------------
Intel Fortran: 557.41 MFlops/s
subroutine daxpy(n,a,r,x,y)
integer (kind=4) :: n
real (kind=8) :: a
real (kind=8), dimension(n) :: r,x,y
integer (kind=4) :: i
do i=1,n
r(i) = a * x(i) + y(i)
end do
end subroutine daxpy
------------------------------------
C# (Microsoft CLR): 399.43 MFlops/s
private static void Daxpy(int n, double a, double[] x, double[] y, double[] r)
{
for (int i = 0; i < x.Length; i++)
{
r[i] = a * x[i] + y[i];
}
}
------------------------------------
The JIT compiler team did a great job. Only Intel Fortran outruns the CLR code. The nice thing about C# is that the bounds checks come for free. However, the major drawback is that CLR arrays can't be larger than 2 GB.
Dick
2009-02-26T09:38:02Z
Thomas,
>Only Intel Fortran outruns the CLR code
Did you use autovectorization with the Intel compiler?
This is one place where the CLR really lacks. MS should implement something like Mono.SIMD and Intel's autovectorization for C#/F#.
dfg
2009-02-26T10:04:56Z
In my view, Adam's comment above is in the right direction and very much what I've had in mind for a while now. I wouldn't go as far as to always force the use of an interface for that, though. I think the architecture would benefit from an equivalent of the mathematical concept of "category":
http://en.wikipedia.org/wiki/Category_(mathematics)
Or, in Adam's words:
"allow the consumers to identify similarities between distinct components and pull them together"
Roughly speaking, the same underlying idea.
In some aspects, these "categories" could be seen as a light-weight version of the concept of interface.
But I suppose coming up with a concept is one thing and implementing it in the architecture is another thing. There I can't help much, but I'll elaborate a bit more on the idea in a later post.
Rüdiger Klaehn
2009-02-27T11:50:57Z
What is really needed is something like a structural type constraint. See this feedback item for how this would work:
https://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=412848
But I must confess that after five years I have given up on C#. The language is just getting more and more bloated while still missing essential features.
I am trying to convince my boss to write the next big project in java with the more numerically complex parts written in scala.
David Nelson
2009-02-27T22:50:08Z
@Rudiger
You're giving up on C# and switching to JAVA? Talk about taking a step backwards...
dfg
2009-02-28T01:24:27Z
Rüdiger:
"structural type constraint"?
Do you mean as in "using generics but combined with some stuff that makes generics not be so generic?"
I guess my question is: If generics have to come with some stuff to not make them so, say, "generic", then what's the point in using generics in the first place?
I mean, what's next? People representing 2 + 2 = 4 with generics just for the sake of using generics?
Rüdiger Klaehn
2009-02-28T04:46:27Z
Re: David Nelson
At least with java you know that they won't add a dozen superficial language "features" for each release. C# has become much too complex and non-orthogonal.
But the language I am planning to use for the more complex algorithms is scala http://www.scala-lang.org/
The common base classes and interfaces will be written in java since that is the lowest common denominator for the java platform.
Re: dfg
A structural type constraint is not less generic than an interface constraint. It is just a different approach to generics.
David Nelson
2009-02-28T14:52:58Z
@Rudiger
To each his own. Yes C# is continuing to evolve, and yes there is a lot to keep up with, but personally I am glad to be using a language and a platform that is still trying to keep up with the needs of modern developers, rather than one which has resigned itself to living in the past.
Paulo Zemek
2009-02-28T16:56:59Z
Rüdiger Klaehn, I will not say that Java has any advantages over .NET other than being multi-platform, but I really liked your proposal for structural constraints.
I, for example, always liked C++ templates because I could create a template for any class with a GetName() method.
I really liked your solution. I hope the .NET team can use a structural solution like yours for generics.
But, as for Luca, I liked his post as well. Luca intended to show how dynamic could be used, and this has been done successfully.
I would really like to see performance numbers comparing dynamic and real primitive types.
Rüdiger Klaehn
2009-03-01T06:25:10Z
Re: David Nelson
The language I am going to use is not java but scala. We are just using java for the common interfaces to ensure interoperability.
I have nothing against adding features to a language, but the features should be general purpose features and not just special syntax to address a special use case.
For example, instead of providing special syntax for nullable types, they should have made generics more flexible so that adding special operators for nullable types could be done in a library.
And don't get me started on the new collection initializer syntax. It uses structural typing (a class that implements IEnumerable and has an Add method is assumed to be a collection), but it does not provide a generic mechanism for those of us who would like to use structural typing for our own purposes.
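To illustrate the structural check (a minimal sketch): the compiler accepts the initializer below only because the class happens to implement IEnumerable and to expose a matching public Add method; there is no interface or constraint that expresses that requirement:

using System.Collections;
using System.Collections.Generic;

class Bag : IEnumerable
{
    private readonly List<int> items = new List<int>();
    public void Add(int x) { items.Add(x); }
    public IEnumerator GetEnumerator() { return items.GetEnumerator(); }
}

// Somewhere in a method:
// var bag = new Bag { 1, 2, 3 };   // compiles purely on structural grounds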
Re: Paulo Zemek
I would love to do some benchmarks. But is there a version of the .NET 4.0 framework available that is not obsolete and does not require Virtual PC? I did not find one.
Eduard Dumitru
2009-03-02T11:46:38Z
Hello everyone,
First of all I would like to thank Anders Hejlsberg for existing, on behalf of the people who think, who design solutions and project them into reality and also thank Eric, Luca, Charlie, and everyone in the Visual Studio, .NET, C#... teams.
I don't actually understand the term "programmer" but, I have been programming since the 4th grade (started with BASIC on a Z80 computer, continued with Pascal, C++, moved to Visual Basic, Visual C++, Delphi, Assembler, (the order is just temporal, there is no actual logic in it), Delphi for .NET, Java, Prolog, C#, F#, Javascript, Python, Ruby ).
At first I lacked a stack (there was only a GOSUB routine which had a 1-length "stack"). Then I lacked memory, loading (of binary modules at runtime). Then I lacked a Garbage Collector. I'm not saying that the things I was looking for weren't out there, somewhere, but they surely weren't in the possibilities offered by that particular language. It is maybe the first time in my life when I am waiting for the next release of a framework, knowing what will be in it and everything I want to use as a language is not yet made.
I'm not trying to say what is good or bad, in general. There are many paradoxes in the human - computer communication that we call programming (it is good to have GC / it is better if I'm allowed to destroy things as I please).
The reason I dared to be so idiosyncratic when writing this comment is that others also dared. I don't really want to read these blogs to know what people choose for a language, or what projects they are working on. I'm not sure what the purpose of these blogs is, but it is my belief that it has nothing to do with the peculiar tastes of the readers.
I'm only writing this because a disturbance was made (in the Force :)) and I believe all things reside in symmetry.
In my opinion, the power of C# stands not in the power to compute large sets of numbers (I would probably use PLINQ somehow to ask a number of processors to do a lot of work, or will make a different executable process and connect to it through I/O, or a different module and load it through "PInvoke Loading"), but rather in the elegance and simplicity of thread-flow and heap-state description. Please don't be fooled by my passion and think that I cannot synchronize threads in C using POSIX, or don't know how to throw an exception from a Java method that states it does not "throws" any.
I think it's all about the maintenance of your ideas while coding. I'm sorry to hear that things like type inference cause a rash to some who appreciate multiple inheritance in contrast. I don't think there's any doubt that reflection is a good thing (I mean in general, in humans, in poetry). Well, Type is a great class (check it out if you haven't, I mean really check it out, see when instances are created and what happens with all the threads).
And for Java lovers who think generics are better in Java because of straightforwardness, I have two small tests:
1. Try to infer on the generic type at runtime.
2. Try to declare a generic type particularization in process A, use I/O to serialize and send it to process B and deserialize it there (and of course, don't mention the generic type particularization syntactically in program B). I wonder what will happen.
Please forgive me if I am wrong, but I suspect that those who said that C# is evolving too fast never got to understand it as a whole. The evolution of C# is normal and it is hard to accomplish. The reason Java is not evolving (from within the core) is because it cannot, not because they don't want it to.
They have made a series of bad choices and are now stuck (they could either evolve and lose compatibility with tons of software already made and tons of knowledge that is within programmers' heads).
You can only go as so far with the evolution as you can. And it is the "childhood sins" that keep you from going any further.
C# is wonderful for me. In my case it is the best compromise between speed, expressivity, maintainability. Please don't be fooled and think that I appreciate the libraries that are pre-written so dearly. I do. But I appreciate the language and the framework the most.
It approaches the power of Javascript and Python from a strongly-typed, highly aware of what IS, perspective.
I don't think the problem of programming should be so highly bound to the engineering issues (the processor, memory, etc). I'm saying this and I am a computer engineer.
It should, in my opinion, be agnostic (in the sense that the compiler, the runtime and maybe part of the libraries, are taking care of those things). Isomorphisms don't always add value.
Thank you for reading this chunk of personal beliefs. I am looking forward to the coming of C# 4.0 (already got the CTP machine :)).
Have a nice day everyone,
Eduard Dumitru
Please excuse my looong comment, and my English spelling.
lucabol
2009-03-02T12:33:15Z
I found it interesting that we have a similar situation with Java <-> Scala as we do with (VB/C#) <-> F#.
ccb
2009-03-04T11:22:35Z
RE: dfg
I am a little late to the party here, but to the point made by dfg about "violating the laws of mathematics", I think I can see a fourth option that could be useful.
For 80x86 CPUs, I believe the division instruction puts the result of the division in one register and the remainder in another. The problem with integer division in high-level languages is that we are only returned the result, and the remainder is lost. If I am not mistaken, the modulo operator is exactly the inverse case: we are given the remainder, not the result, even though at the hardware level a division operation was still executed.
I think it would be possible to capture the remainder value and save it as a property of the integer variable (at least in a managed language). I am not sure how this is impacted by integers being value types in .NET.
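For what it's worth, in a managed language both pieces can already be had in one call today; a small illustration using the existing Math.DivRem API (just to show the idea, not a new proposal):

int remainder;
int quotient = Math.DivRem(7, 2, out remainder);   // quotient = 3, remainder = 1
Console.WriteLine("{0} remainder {1}", quotient, remainder);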