Well ... interesting Meteor numbers in the State of JS

I think this is like asking npm.com to invest money in improving the NPM packages because you are dissatisfied with the updates or features…

As I see it, there are two possibilities here: either you do not understand open source, or, if you do, you might be interested in the advice to “stop using free resources and invest some money into it yourself; there’s no such thing as a free lunch.”

Yes, I am also expressing my opinion, and I happen to disagree with you, but honestly I have no issue at all with having different views. I think we are all here for more or less similar reasons, and I now run a business. Also, the Meteor forum is not shy about negativity at all.

As for the survey, it is possible (and frankly likely) that the sample is biased. There are more than 12 million JS developers, and the survey reaches around 20 thousand of them, so roughly 0.2%. If you ask the same 20k every year, you will probably get similar results. It is also a form of freedom of expression to be a bit skeptical of the methodology. But I also applaud the effort by the survey team; I’m just speculating about the numbers, and I wish we had more drill-down on the data. There is always room to improve.

That’s not how you assess whether your sample is big enough to publish a survey. 20k is more than enough to reflect a statistically sound base of the JS community.
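For what it’s worth, here is a quick back-of-the-envelope check of that claim: the standard 95% margin-of-error formula for a proportion, at the worst case p = 0.5, assuming simple random sampling (which a self-selected web survey is not, so this only addresses size, not bias):

```ts
// Margin of error for an estimated proportion at 95% confidence (z ≈ 1.96).
// Worst case p = 0.5; assumes simple random sampling.
function marginOfError(sampleSize: number, p = 0.5, z = 1.96): number {
  return z * Math.sqrt((p * (1 - p)) / sampleSize);
}

console.log(marginOfError(20_000)); // ≈ 0.0069 → ±0.7 percentage points
console.log(marginOfError(1_000));  // ≈ 0.0310 → ±3.1 percentage points
```

Under random sampling, 20k respondents pin a percentage down to well under a point; the real open question is selection bias, not sample size.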

But this is going OT

1 Like

Judgement is almost always biased, whether it is about people or things. People, and that seems to be rooted in human nature, hold polarized opinions about everything. Objectivity is a myth.

So what usually happens is that things or people seen as somewhat better than average get all the credit and enthusiasm, while things and people perceived as below average get dragged down. “Good” doesn’t need to be objectively good; it’s enough to be seen as fancy or trendy or funny or lovely. That’s all very sad, but that’s how things work.

Is Meteor now fancy or trendy or funny or lovely? Well, ours is an infantilized world, let’s face it.

7 Likes

First of all, do you know who Tiny Capital is, the owner of Meteor?

Let me help you: Tiny Capital - Crunchbase Investor Profile & Investments

This is from their “About me” section: “Tiny Capital makes investments in both private and public companies and have a base of permanent capital from a family office.”

It’s a venture capital company that acquired Meteor Development for an undisclosed amount on 2 October 2019 (as per Crunchbase; I didn’t fact-check that).

So a VC is now considered the gold standard for an open source project sponsor?

I think it’s you who misunderstands something here about the nature of a VC.

NPM was acquired by Github which was acquired by Microsoft. Last time I checked, Microsoft isn’t a VC.

You’re comparing apples with oranges.

Yes, I stand by my earlier statement. It’s in the nature of a VC to milk the cow, and to buy or invest in little calves as well as half-dead cows, as long as they think they can make money from that investment; a quick ROI, I might stress.

Now you still might think that a VC is the right project sponsor for Meteor but I also can still strongly disagree with that :wink:

Meteor is an open source platform regardless of who owns it. Sorry, I’d love to participate more in this conversation, but I am a software developer and not so good with politics. I’m a bit busy running my business and contributing back to this community whenever I can.

Never in my life have I gotten so much intelligence and quality for free. Fifteen years back I would have paid even to convert a JPEG to a PNG… I never really had time to care about who Tiny is and what it is doing…

3 Likes

0.2% is a low number, there are many potential biases, and it is reasonable to dig deeper into who those 47% are and whether or not the sample is representative. Are they the same group? Are they different from last year? Did they try the framework within the last 5 years?

It is reasonable to drill down on this group and figure out what exactly is happening. Why should I take the survey on faith or at face value? Wouldn’t it be reasonable to drill down further?

1 Like

How to choose a sample size (for the statistically challenged)

http://www.tools4dev.org/resources/how-to-choose-a-sample-size/

Thanks for the link, but I studied software engineering at the master’s level. I know how to conduct a basic survey, and I have seen plenty of ways one can go wrong.

Also, from your link:

“A good maximum sample size is usually around 10% of the population”

We are talking about 0.2% here!

1 Like

We should get Elon Musk to casually mention that one of his systems was developed using Meteor and that he is very convinced by it. The next day, Meteor would start to skyrocket for years to come.

4 Likes

Whoever shares a joint with him and is also a member of this community should certainly ask for this “tiny” little favor the next time they hook up :wink:

4 Likes

I totally respect your opinion, yet I’d like to offer another approach.

Rather than trying to probe the survey in order to prove that it is in fact irrelevant with respect to Meteor, we should maybe try finding creative ways to make Meteor, for lack of a better word, sexy. Meteor is great in so many ways. What’s not so great is its public image. So let’s change people’s minds! It’s all about propaganda, even though this word may sound a little awkward.

(edit: grammatical error)

5 Likes

I agree with you for sure, but I think the Tiny team should also try to investigate the data behind the “Not Interested” category; I am really curious about the drill-down for this group.

Also, the makeup of the sample is critical: if there are many new developers who have never even heard of Meteor, then the go-forward strategy would be very different from trying to change the opinion of a biased sample.

That is why I am being careful about how I interpret this result. Either way, we should make Meteor sexier for sure :wink:

I am writing an article about this. It’s not complete yet, and I’d like to ask you to read it (~5 min) and add some suggestions so we can present the whole situation from our point of view (not only mine):

https://dev.to/jankapunkt/do-not-avoid-meteor-23kb-temp-slug-3727711?preview=2cd531d50bece506cc19493521eb69a91f693cc6940a567b08cee2f78aa01cb9ad75cc5c0b8780eae2c0d2b3c39f185b353f8fead1816a51d57977c2

Edit: please also let me know if there are articles that you or others have written that should be linked there, too.

9 Likes

From what I remember (I saw the numbers yesterday), most of the negative numbers come from people who did not use it.

As with people, even a false reputation may come from old events or gossip, while the person might have changed.

The survey should function like some other awards: comment or vote only if you tried it in the last year.

Another method used in statistics is panels: keep following the same crowd over the years, and if they do not like something, record a comment.

Then, after the survey, make the answers available, so anybody can reply to them at any time during the year until the next survey.

In the following years, people can see new comments as they are posted, giving them a chance to reevaluate things.

When they come to the new survey, show them the reason they recorded, as well as the new comments, and ask whether they have tried it again. Count their answer only if it is based on experience.

Otherwise, their answer is not about the last 365 days. Also, people could read and vote on existing comments rather than adding one more, although they could still add a complementary comment under a specific main comment or feature.

The survey database would be a great log of each technology’s evolution, beyond the summary numbers.

That way, we would know the up-to-date reality of each technology, helping each one improve under its existing name rather than having to start over under a new one.
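To make the proposal a bit more concrete, here is a rough sketch of what one record in such a panel log could look like; all names and fields here are hypothetical, not from any real State of JS schema:

```ts
// Hypothetical data model for the panel-style survey log described above.
type Sentiment = 'interested' | 'not_interested' | 'would_use_again' | 'would_not_use_again';

interface PanelResponse {
  respondentId: string;           // same respondent tracked across years (the panel)
  technology: string;             // e.g. 'meteor'
  year: number;
  usedWithinLast365Days: boolean; // gate for experience-based answers
  sentiment: Sentiment;
  reason?: string;                // the comment recorded alongside the vote
  endorsedCommentIds: string[];   // existing comments the respondent voted for
}

// Counting rule from the proposal: an answer contributes to the score
// only if it is grounded in experience from the last 365 days.
function countsTowardScore(r: PanelResponse): boolean {
  return r.usedWithinLast365Days;
}
```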

4 Likes

I totally agree; the survey should emphasize “used within the last 365 days” or offer another option:

not used within the last 365 days

1 Like

In fact, we want the opinion of those who have studied the state of development of a specific tech. And in the end, we want to see the average score from those people!

3 Likes

Meteor’s problems started with VCs pulling the rug out from under it in 2015, when it was still with MDG, because, although profitable, it was not becoming the billion-dollar business they envisaged.

If anything, Tiny is a better VC, one that actually seems to run businesses for profit. It has a real person behind it, Andrew Wilkinson, who is an actual software engineer.

If anyone has doubts about Andrew’s optics on how to run a business, please read https://medium.com/@awilkinson/fire-in-a-crowded-forest-441842c2e0ed and maybe a few more posts.

I’m not questioning his optics or his ability to run a business; after all, I don’t know him at all.

What I question is whether a VC in general is a good sponsor for an open source project.

Tiny Capital is in here because of Galaxy; that’s how they make money. Felipe is employed by them to get more businesses that use Meteor in development to run their apps on Galaxy. That’s it.

It would be best to release Meteor development and let it be run completely by private and corporate investors backing its development; Tiny Capital can keep Galaxy and its revenue.

Was statistics part of your studies as well? It was part of mine, in Computer Science with Business Administration.

But to help you: it seems you’ve overlooked the headline and the subsequent explanation. This is from the same article:

" A good maximum sample size is usually 10% as long as it does not exceed 1000

A good maximum sample size is usually around 10% of the population, as long as this does not exceed 1000. For example, in a population of 5000, 10% would be 500. In a population of 200,000, 10% would be 20,000. This exceeds 1000, so in this case the maximum would be 1000.

Even in a population of 200,000, sampling 1000 people will normally give a fairly accurate result. Sampling more than 1000 people won’t add much to the accuracy given the extra time and money it would cost."

Hope I could help. And yes, I’ve worked many years as a data scientist, so I performed statistically sound sampling for the better part of 10 years.
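To put numbers on the quoted rule of thumb: once the population is large, the fraction of it you sample barely matters, which is exactly why the recommendation caps out at 1000. A minimal sketch using the finite population correction, with the 12 million developers mentioned earlier in the thread:

```ts
// Margin of error with the finite population correction (FPC):
// moe = z * sqrt(p(1 - p) / n) * sqrt((N - n) / (N - 1))
function moeWithFpc(n: number, N: number, p = 0.5, z = 1.96): number {
  const fpc = Math.sqrt((N - n) / (N - 1));
  return z * Math.sqrt((p * (1 - p)) / n) * fpc;
}

console.log(moeWithFpc(1_000, 12_000_000));  // ≈ 0.0310 → ±3.1 points
console.log(moeWithFpc(20_000, 12_000_000)); // ≈ 0.0069 → ±0.7 points
console.log(moeWithFpc(20_000, 200_000));    // ≈ 0.0066 → 10% of the population, barely better
```

So sampling 0.2% versus 10% of the population changes almost nothing; what matters is who ends up in the sample.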