Oh that’s genius :facepalm:
And therein lies the problem, I guess. Wrong tool for the job, but also the only one that can be used.
It was designed by Netscape for front-end web page scripting. Whether it did a good job is questionable. What can we expect from a language that was developed in just 10 days? Did you program in JavaScript in the dark ages, when we always had to test using at least 2 different browsers? It was annoying to have to write code that would work on all of them. We often had to write workarounds just to make something work in a particular browser even when it worked fine in the others. I hated it. Yet now I’m deeply immersed in JavaScript for what it was not originally designed for: server-side scripting with high-frequency database access and monetary calculations.
That’s right - it’s a bad language, yet we are all stuck with it because it’s the only standard scripting language with universal browser support. This is one of the points contributing to “the sad state of web development”.
@tab00 have you looked into the Elm language at all? Its strong typing and ML syntax remind me of F# for the browser. The only con is that it doesn’t interoperate as nicely as TypeScript.
You are mixing up the language with the ecosystem and making judgments on wrong premises.
Watch this (YouTube video)
Use this (or any other)
http://mathjs.org/
(type 0.1 + 0.2 != 0.3 into the demo)
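For reference, this is roughly what the difference looks like in plain JavaScript versus math.js BigNumbers (a minimal sketch using math.js’s documented math.bignumber, math.add and math.equal functions):

// Plain JavaScript doubles: binary floating point cannot represent 0.1 exactly
0.1 + 0.2;              // 0.30000000000000004
0.1 + 0.2 === 0.3;      // false

// math.js with arbitrary-precision BigNumbers
var math = require('mathjs');
var sum = math.add(math.bignumber('0.1'), math.bignumber('0.2'));
math.equal(sum, math.bignumber('0.3'));   // true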
Use TypeScript
http://www.typescriptlang.org/
Eventually Dec64 will make it into ES, but also, asm.js and WebAssembly are coming. That will further blur the line between subcomponent routines and the application-layer glue for which JS is well known.
Does TypeScript have a decimal type appropriate for monetary calculations? I could only find the number type in the language specification.
I know that there are libraries / packages in the JavaScript / TypeScript ecosystem. Maybe you missed my post above in which I said that I chose to use BigNumber. So I have not mixed up the language with the ecosystem.
My point is that it would have been better to have a numerical data type appropriate for precise monetary calculations as part of the language instead of needing to rely on libraries or packages.
Crockford says that using the floating point type that JavaScript now has was the wrong language design choice because of the imprecision, but it was done because other programming languages at the time used it too. Yet years ago Java added BigDecimal and .NET added decimal to deal with the imprecision problem. Nothing was done to solve it in JavaScript, and it appears that TypeScript has not solved it either.
His Dec64 solution could be great but we don’t know how long it would take for it to be available for us to use.
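For context, arithmetic with the bignumber.js package mentioned above looks roughly like this (a sketch, not the poster’s actual code):

var BigNumber = require('bignumber.js');

// Decimal string inputs avoid the binary float representation entirely
var subtotal = new BigNumber('0.1').plus('0.2');
subtotal.eq('0.3');                // true
subtotal.times('3').toFixed(2);    // '0.90'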
Agree to an extent, but there are usually well-crafted reasons for these things to have existed in the first place.
Node.js had a pretty good reason for being made; now engineers could use a single language on the front end and the back end. Whether or not it worked out well in practice is a cause for debate, sure.
JavaScript, as a language, is vastly improving. I would say that by ES7 it will be full-featured enough that it does not need excessive added modules. You can build a whole application without adding many modules except for a database driver.
That’s why I disagree with him about Babel. Babel is fantastic, and appears to be made to be very modular. That’s why it comes with little out of the box – you gotta add what you need and only what you need.
Also, my username is corvid, does that mean by definition I am a magpie developer?
No. I had a quick look and there is a float type but it doesn’t look like there is a decimal type for precise monetary calculations. Is such a thing an odd requirement? With ecommerce continuously growing and the internet being increasingly used for financial applications, I would not have thought so.
That’s what I mean. You say JS does not come with xyz out of the box, as a drawback of the language. If we were talking about PHP and the mess of “functionality” that was added over time, we would be talking about how .NET might not be so bloated by comparison. JS, in fact, has a very small footprint, where functionality is added via the ecosystem.
I.e. TypeScript - as the name implies - is only a typing superset. It is not meant to advance the handling of binary computations. In the same way, many other options exist - as you have found - for the floating-point issue you have, that don’t advance the typing system.
So you confuse language features with the ecosystem (i.e. libraries): while floating point is a long-known problem that has been solved many times, and for which many solutions exist, you try to weigh “this flaw” against the penetration of JS. A pretty bad way of evaluating a value proposition.
JavaScript uses floating point. Does that mean you cannot build 100% accurate, real-time Business Intelligence systems? No. In fact, we do, where only a small fraction of the code is C/Python for machine learning while everything else is JS.
Though I could find many arguments, from both a technical and business perspective, where .NET/C#/Java would have been a nightmare to achieve the same.
A valid argument “against JS” (as if there were an alternative) is probably the fragmentation of libraries (the ECMAScript committee, npm and Meteor are a solution) and the ultimate native speed of the JS JIT (for that, asm.js / WebAssembly are coming).
I’d be genuinely interested to hear some of those in a bit more detail - for a newcomer to JS (like me) it can be difficult to see the advantages when you are used to another way of working. It would be really helpful to get an idea of some of the positives and feel like there’s something to look forward to!
There’s much less of a distinction between language and libraries in .net though I guess as C# and .Net are effectively so tightly interwoven. It’s interesting that .Net Core starts to reduce some of .Net’s bloat though.
There is a difference between being solved by the language designers / developers and being “solved” by the ecosystem. Using libraries / packages for this problem is really just a workaround for a deficiency. It’s nowhere near as convenient as native support in the language.
e.g. with BigNumber I get a new type called BigNumber. But other packages that I use don’t know what a BigNumber type is, which has caused problems that I have to try to work around (e.g. writing transform functions to convert a plain object to a BigNumber and vice-versa). If there were such a BigNumber type in the language itself (like decimal in .NET) then all packages would know what a BigNumber is, I wouldn’t have any issues, and we wouldn’t be having this conversation at all - it would all just work.
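To illustrate the kind of workaround being described, a hypothetical pair of transform functions might look like this (toPlain / fromPlain are made-up names, not the poster’s actual code):

var BigNumber = require('bignumber.js');

// Hypothetical transforms: serialize BigNumber values as decimal strings
// so packages that only understand plain values can pass them through.
function toPlain(bn) {
  return bn.toString();        // e.g. '19.99'
}
function fromPlain(str) {
  return new BigNumber(str);   // back to a BigNumber
}

var stored = toPlain(new BigNumber('19.99'));
var revived = fromPlain(stored);
revived.eq('19.99');           // true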
Well, there is a pretty big long tail, but technically and from the business perspective…
Well, I am not sure where the difference is, but if you use a package that does calculations on monetary transactions (for example), then I would expect it to use a solution for the problem. Much like a library picking the right type in a strongly typed language, or actually using decimal in .NET.
The point is that clinging to the floating-point flaw, which, yes, is a flaw, is only remotely related to what you can actually achieve in the JS world. It may become more convenient in a future ES version, but you certainly don’t depend on it.
As I’ve just said, other problems arose after using the BigNumber package that works around JavaScript’s floating-point flaw, e.g. other packages treat a BigNumber object as a plain object, which then requires extra work to handle. So I guess it is also a problem of lack of support for custom types.
Well, in that particular case I cannot really help you, because I neither know the package nor have extended experience with BigNumber. But in general, a floating-point-critical package should either allow you to define the math library or have an internal solution.
Also keep in mind, floating point only causes trouble in specific cases of decimal operations. Working in integers (transposing down to the smallest unit, i.e. working in cents instead of dollars) is a common practice even in strongly typed languages, and is often all you need to do to solve most floating-point problems.
In that regard, Stack Overflow is full of advice, and in my experience chances are often high that many problems don’t even need a math library.
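A minimal sketch of that integer approach in plain JavaScript (integers are exactly representable in doubles up to Number.MAX_SAFE_INTEGER):

// Keep all amounts in cents, as plain integers
var itemsInCents = [199, 350, 1099];       // $1.99, $3.50, $10.99
var totalCents = itemsInCents.reduce(function (sum, c) {
  return sum + c;
}, 0);                                     // 1648, exact

// Convert to dollars only at display time
(totalCents / 100).toFixed(2);             // '16.48'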
Yes, I know of that approach and it appears to be a frequent suggestion. It would be a simple solution if I were working with just one currency, or with currencies that have the same number of decimal places. However, I am working with currencies that have different numbers of decimal places, which complicates things if I try to use integers only.
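For what it’s worth, the usual way to handle that is a per-currency exponent table (the ISO 4217 minor-unit digits). MINOR_UNITS below is a hypothetical illustration, not a complete list:

// Minor-unit digits per currency: USD has 2 (cents), JPY has 0, BHD has 3 (fils)
var MINOR_UNITS = { USD: 2, JPY: 0, BHD: 3 };

function toMinorUnits(amountStr, currency) {
  // Math.round repairs float noise from the multiplication (1.23 * 100 -> 123.000...01)
  return Math.round(parseFloat(amountStr) * Math.pow(10, MINOR_UNITS[currency]));
}

function formatAmount(minor, currency) {
  var digits = MINOR_UNITS[currency];
  return (minor / Math.pow(10, digits)).toFixed(digits);
}

toMinorUnits('1.23', 'USD');   // 123
formatAmount(500, 'BHD');      // '0.500'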
Just the fact that financials is the largest sector in the U.S. stock market by capitalization, and has been for most of the past decade (currently worth 5.81T USD, 33% of the total market, as shown at Fidelity Investments Sectors & Industries Overview), should have been reason enough to add support for precise monetary calculations to the language (as was done in .NET and Java years ago). Maybe language designers (or software developers in general?) simply don’t take much notice of the financial markets.
Now that JavaScript has become so popular for internet applications, something needs to be done about this deficiency to cater for the increasing demand for financial applications on the internet. Third-party libraries / packages are not a proper solution.
It would be better to fix this late than never.
So one thing we can conclude: do not use floats for money in any programming language. The calculations are imprecise, and if you have a currency with more than 2 decimals you have a problem. A friend of mine worked at a bank startup and he confirmed that floats are never used there to represent money.
@tab00 has a point that JavaScript doesn’t solve this issue in the language itself, like Java and .NET do with BigInt or similar. So it would be great to see this some day.
In the meantime I do think that this can be solved quite easily. For instance, you can use microcents: where 100 cents represent one dollar, 1000 microcents represent one dollar. This solves the issue for currencies that have 3 decimals. As the example below shows, you can use even larger scale factors for more precision.
Here is an example of how Google AdWords does it:
Fields of type Money are returned in micro currency units (micros), e.g.: $1.23 will come back as 1230000 (1.23 x 1,000,000).
See: https://developers.google.com/adwords/api/docs/guides/reporting-concepts#money
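Converting such a value back is just a division by the scale factor:

// AdWords money fields: 1,000,000 micros per currency unit
var micros = 1230000;
var amount = micros / 1000000;   // 1.23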
The only place where the number of decimals really matters is when showing amounts in the interface (correct me if I’m wrong). This can be done in an elegant way using the numbro library:
var numbro = require('numbro');

// "amount" is in thousandths of the currency unit (1000 = one dollar)
function formatMicroMoney (amount, cultureCode) {
  numbro.language(cultureCode);   // switch locale for symbol and separators
  return numbro(amount / 1000).formatCurrency();
}
A very basic example but much can be achieved with this.
Calling a language bad because it misses a feature would have made Java bad for a very long time, because it was missing lambdas while JavaScript had them from the start. Well, guess what, Java was still pretty good without them. So JavaScript is still a brilliant language without BigInt or strong typing, but it could improve if it had those.
Can we conclude that all languages have pros and cons and that you always have to work around the quirks?
This says it all - “and my users don’t care that my code has been refactored”. I built something, and then things kept changing: debates, rumors (we’re dropping Blaze, we’re not dropping Blaze, rebuild everything in React… ummmm, we’re moving to npm, hey, here’s this GraphQL / Apollo thing over here!). So I gave up. Too bad too, because I had people using my app but I could no longer support it.
I’m going to lock this thread because it’s over 6 months old.