If I'm reading this right, then the real big benefit of these things would be solid state magnetic storage.
The benefit of these things is they don't create a magnetic field while they do respond to magnetic fields. That means you can pretty tightly pack these things together without concern that they'll interact with each other. A light electric pulse could determine if the bit is a 1 or a zero and a strong pulse would flip the bit.
I'm guessing that due to this nature, these things would actually have pretty long shelf lives and near infinite read/write cycles since you are, effectively, just flipping atoms around and not actually breaking structures or dumping in charge.
These should mostly work with regular silicon manufacturing. The tricky part will be how tightly you can pack these things together before the reading structures start interfering with each other.
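To make the two-threshold idea concrete, here's a toy sketch in Python (purely illustrative; the pulse strengths and threshold are made-up numbers, not a device model):

    # Toy model of a two-threshold bit cell: a weak pulse senses the
    # state without disturbing it, a strong pulse flips it.
    READ_PULSE = 0.1       # below the flip threshold: read only
    WRITE_PULSE = 1.0      # above the flip threshold: flips the bit
    FLIP_THRESHOLD = 0.5   # made-up value for illustration

    def apply_pulse(bit: int, strength: float) -> int:
        """Return the (possibly flipped) bit after a pulse."""
        return bit ^ 1 if strength >= FLIP_THRESHOLD else bit

    cell = 0
    assert apply_pulse(cell, READ_PULSE) == 0  # reading leaves the state alone
    cell = apply_pulse(cell, WRITE_PULSE)      # writing flips it
    assert cell == 1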
imdsm 1 day ago [-]
> A light electric pulse could determine if the bit is a 1 or a zero and a strong pulse would flip the bit.
Feynman moment. Breaking it down into one sentence. Bravo!
OhNoNotAgain_99 24 hours ago [-]
That line is worth a dollar of subscription.
willlma 1 minute ago [-]
I had the same thought with a recent NewScientist article. Signed up, read it, and tried to cancel to avoid the recurring fee. There's no click to cancel; I had to submit a form + request and repeatedly check my email to see if my request had been honored. I'm still waiting for the perfect pay-per-article platform to show up.
pedro_caetano 23 hours ago [-]
> If I'm reading this right, then the real big benefit of these things would be solid state magnetic storage.
Wouldn't this also enable a much higher resolution and better noise immunity for the entire zoo of industry sensors that are based on the Hall effect?
That's kind of what the article is talking about. You can do it in a ferromagnetic material (like in your link) but then you've got the problem that it's a magnet and screws up everything around it.
The idea here is that you can do it in something that is overall neutral.
fc417fc802 11 hours ago [-]
> these things would actually have pretty long shelf lives and near infinite read/write cycles since you are, effectively, just flipping atoms around and not actually breaking structures or dumping in charge.
Didn't we already play this exact scenario out with 3D XPoint? Is there any expectation that this new type of magnet will be able to compete with flash memory on a unit cost basis?
Actually, I guess data retention when unpowered was one year or less, but still: is that factor ultimately what led to its discontinuation? I doubt it. For long-term data retention you're competing with magnetic tape, which has a shelf life measured in decades and an impressively low cost per bit.
vanderZwan 10 hours ago [-]
> One of Šmejkal’s favourite pieces is Horseman, a striking picture that features an elaborate, tessellating series of mounted figures. Strangely enough, it was this piece that inspired him to predict the existence of an entirely new kind of magnetism.
I find it a little annoying that they don't show the actual artwork (although they do link to a page with it[0]), and give a description that does not really capture what the image conveys. Because upon seeing it, it immediately becomes obvious how that might inspire someone who thinks about electromagnetic fields all day. Well, obvious to people with some passing familiarity with electromagnetic fields at least.
I'm assuming copyright got in the way, but even then they could have added an equivalent illustration of their own.
[0] https://escherinhetpaleis.nl/en/about-escher/escher-today/ho...
[1] https://www.nga.gov/artworks/54229-horseman (alternate link in case the first one doesn't load)
That museum (Escher in het Paleis) is worth visiting if you get the chance.
vanderZwan 3 hours ago [-]
I'm Dutch and I have a bachelor's degree in arts, so it probably won't surprise you that I have :). And yes, it is!
vanderZwan 9 hours ago [-]
> In 2024, researchers led by Atasi Chakraborty, a member of Šmejkal’s research group, demonstrated that applying compressive strain to rhenium dioxide – long known to be an antiferromagnet – triggers a transition into an altermagnetic state.
> What’s more, a trio of researchers at the Beijing Institute of Technology in China realised that you can also create the right internal magnetic disturbances by stacking an antiferromagnet between layers of a different material, like a sandwich.
Does anyone else find it odd that they do not name the authors of the paper[0] that showed the second discovery? (Yichen Liu, Junxi Yu, and Cheng-Cheng Liu, for the record).
[0] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
Thank you!! I didn't see the enlarged version of the image and was wondering if they had made a mistake.
card_zero 10 hours ago [-]
It's a New Scientist diagram, so it's still going to be wrong somehow. We have the quantum "up" and "down", which are not actual directions. The diagram has arrows pointing up or down. Redundantly, every atom with an up arrow is red and every atom with a down arrow is blue. The article also speaks of "enough atoms with magnetic moments pointing in the same direction" to create a strong field, which I think is a literal direction. Then it speaks of "magnetic arrows", presumably a term invented for the article which could mean anything. Then it also speaks of "rotated atoms". Does an atom have a direction? The diagram has oval blobs around the atoms. These rotate alternatingly in the altermagnet. But the arrows don't rotate. So what are the blobs, and what are the arrows?
rollcat 8 hours ago [-]
Popular science for you. To be able to explain something is to have a good understanding of it. To explain it in simple terms takes much more.
jasonthorsness 1 day ago [-]
The article does a decent job eventually of explaining a use-case in the section "Confirming that altermagnets exist".
Seems you can store information at high density in electron spin in materials where spins are naturally organized. However, so far the only suitable materials have been ferromagnets, which have macroscopic magnetic fields that make using them a nightmare. The new altermagnets have suitably organized spins but the atoms alternate their magnetic fields so there is no net magnetism from the material and they are easier to work with.
physicsguy 12 hours ago [-]
I did my PhD in this area. It’s very difficult to “read” information stored when there is no net magnetism. Hard drives are built using a read head that detects the stray field from ferromagnetic materials.
When there are particular topological textures like skyrmions you do get a spin Hall effect, which is quantised, but in applying currents you also change the magnetisation through something called spin-transfer torque.
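For reference, the usual macrospin picture behind that last point is the Landau-Lifshitz-Gilbert equation with a Slonczewski-type spin-transfer-torque term (the textbook ferromagnet form, quoted here only to show why a sensing current also exerts a torque; prefactor conventions vary):

    \frac{d\mathbf{m}}{dt} = -\gamma\, \mathbf{m}\times\mathbf{H}_\mathrm{eff}
        + \alpha\, \mathbf{m}\times\frac{d\mathbf{m}}{dt}
        + \tau_\mathrm{STT}\, \mathbf{m}\times(\mathbf{m}\times\mathbf{m}_p)

where \mathbf{m} is the unit magnetisation, \mathbf{m}_p the spin polarisation of the current, and \tau_\mathrm{STT} scales with the applied current density, so any current strong enough to sense the state also exerts some torque on it.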
vanderZwan 2 hours ago [-]
So any read operation would likely have to be a read-write operation in practice, essentially?
Animats 24 hours ago [-]
"However, researchers tend to feel that these clever tricks may not lead to scalable altermagnets anytime soon, as the methods are difficult to pull off."
This is a good scientific discovery, if replicated, but the hype drowns out the science.
dylan604 23 hours ago [-]
But that's different from a lot of announcements where, after reading, you are left with the impression that a viable product is right around the corner.
taeric 18 hours ago [-]
I like how the headline makes it sound like we may have had an explosion of new magnet types for a while, and then that stopped for nearly a century. :D
ggm 23 hours ago [-]
Analogous beliefs about new RAM models up-ending the memory market didn't pan out: those models were far slower to hit the streets, and had far less impact on the DRAM market.
I would love to have something at modern memory speed, which behaved like core: Turn off the machine, its run-state is frozen. Turn back on, the memory state is still there.
But the reality is that machines are built to DRAM, and DRAM persists as the basic memory model for the "architecture" of a system.
pjc50 7 hours ago [-]
FRAM exists, but is way behind the miniaturization curve, and I think is only available from TI?
codedokode 23 hours ago [-]
You can save memory after turning the system off, and swap pages in after turning on.
cgannett 23 hours ago [-]
Even if it's not faster than DRAM, I wonder at what point it would just not make sense to have two separate modules. Like, all the volatile storage in a system could be in the L-cache(s) on the CPU, and for most things that plus this faster non-volatile storage would do the trick. I wonder what kind of optimizations would have been made in a world where stuff like DRAM just didn't exist and we just had to deal with the bottlenecks of non-volatile storage media.
coredog64 19 hours ago [-]
Sounds like the era when we mainly transitioned to solid state storage. When spinning rust was the primary technology and RAM prices were high, you could buy hybrid drives that had small amounts of solid state cache (mainly intended for common OS files). And even today, all but the cheapest SSDs will have onboard DRAM for better write speeds.
Tossrock 18 hours ago [-]
Weird that there was no mention of paramagnetism/diamagnetism in here.
degamad 12 hours ago [-]
The phys.org link does have this relevant statement as to why those two are a different type of thing:
> Although other types of magnetism, such as diamagnetism and paramagnetism have been categorized, these describe specific responses to externally applied magnetic fields rather than spontaneous magnetic orderings in materials.
searine 24 hours ago [-]
Brought to you by the taxpayers of Czechia, Germany, and the EU via:
* Czech Science Foundation
* The Ministry of Education of the Czech Republic
* European Research Council
* Deutsche Forschungsgemeinschaft (German Research Foundation)
pbhjpbhj 22 hours ago [-]
Which raises the question -- Norway famously made its citizens' fund using oil money, but which country has made the most for its citizens from [technology] IP?
Natural resources v IP resources?
pjc50 7 hours ago [-]
> but which country has made the most for its citizens from [technology] IP?
Post-1950 this is unambiguously the United States.
pbhjpbhj 3 hours ago [-]
What specific government/country-owned IP?
Just what you think the most lucrative is.
mNovak 1 hour ago [-]
Not only the research, but the deployment of the GPS constellation comes to mind. [1] gives an estimated economic benefit of $1.4T in the US alone.
[1] https://space.commerce.gov/doc-study-on-economic-benefits-of...
Norway basically. Nearly every other country just throws the money back into the general fund or accumulates it at the top.
wongarsu 7 hours ago [-]
Saudi Arabia would be the other noteworthy example. Acquired Saudi Aramco in the 70s/80s, and later diversified with projects like their (in)famous Public Investment Fund
eccentricwind 21 hours ago [-]
If it's not all hype and hyperbole, it could be a massive breakthrough in data storage, but it's better to remain prudent and reserve some doubt.
Chief_Searcha 20 hours ago [-]
Yep that's pretty much my take on any scientific article. It's really hard to tell which of these things you'll hear about 10 years down the line.
Doesn't this imply a 4th type with alternating rotated atoms and aligned magnetic spin? Also seems like you could mix and match (making the effect continuously tunable at macro scale).
hammock 20 hours ago [-]
Yeah, they mentioned it in the article: antialtermagnetism.
godelski 24 hours ago [-]
> In a paper that hasn’t yet been peer-reviewed, he and his colleagues predict the existence of yet another kind of magnetism, which he calls antialtermagnetism.
Can we stop referring to ArXiv papers this way? And for the love of God, just link the fucking abstract, never NEVER link the html![0] You just change {html,pdf} -> abs
We shouldn't say "not peer reviewed" because it isn't accurate. Being published in a journal doesn't mean a work is correct nor does it mean peers read it. Putting the paper on ArXiv does mean peers are reviewing it. The point of publishing is to communicate our work to others and journals and conferences can often be harmful to that process, making researchers oversell or even avoid looking in certain directions because a few opinionated peers shoot them down. It's happened to Nobel level works too.
The review process is just fucked up. It might be able to tell you if a paper is wrong but it can't tell you if it has no mistakes or is right. I mean, it took two years to confirm this one, right? (Physical validation.) But the way we say "hasn't been peer reviewed" implies that if it has been published in a journal then it's factual. That's not how it works and frankly that's not how it should work.
On top of that they take money from the government, get articles for free, don't pay reviewers (meaning the universities pay for reviewers), and have the audacity to charge people to read that work. It's basically just a scheme to extract government money.
Sorry, I really just hate the publication process. It stifles innovation and wastes so many people's time.
[0] https://arxiv.org/abs/2309.01607
That’s because we’ve basically reinterpreted what “peer review” is.
Peer review used to mean “some peers have reviewed it”, mainly the editors, who pushed for correctness and novelty. There was a clear difference between publishing and making a paper public. It never meant “it’s right”, but it meant “it has passed basic quality control and it’s worth your time to read it”.
Modern day academics push people to fragment into ever smaller niches, meaning most editors are nowadays completely out of their depth when evaluating papers, so now we keep referring to editor approval as “peer review” and try to diminish the public perception that comes with it.
tensor 21 hours ago [-]
This is not true. In most of the top journals you need at least three other practitioners in your field to read it and sign off on it. The editor finds the appropriate reviewers, manages the process, does some basic format and other types of vetting, and also will accept or reject it based on the reviews from the reviewers.
The reviewers here are the "peers", and generally are expected to be qualified experts in the area that the paper deals with.
godelski 19 hours ago [-]
> This is not true. In most of the top journals you need at least three other practitioners in your field to read it and sign off on it.
You're misreading xondono as well as me. I think your idea of what peer review is (in practice) is too idealized.
The problem is the word "expert". We're using it to mean different things, and the difference is important. Despite it appearing that way, "expert" is not a binary condition. It is a spectrum, and where the threshold falls along that spectrum depends on context. Ours (xondono, correct me if I misinterpreted) is higher than the one you're using.
Finding appropriate reviewers is a non-trivial task, which is kinda the entire problem. You can have a PhD in machine learning and that does not mean you're qualified to review another machine learning paper. I know, because I've told ACs I'm not qualified for certain works!
The problem is that what is being published is new knowledge. I'll refer to the (very very short) "illustrated guide to a Ph.D." How many people are qualified to determine if that knowledge is new? It's probably a lot fewer than you think. Let's go back to ML. Let's say your PhD and all your work is in Vision Transformers. Does that mean you're qualified to evaluate a paper on diffusion models? Truth is, probably not. Hell, there's been papers I've reviewed where I'm literally 1 of 2 people in the world who are the appropriate reviewers (the other is the main author of the paper we wrote that's being extended).
Hell, most people working on diffusion aren't even qualified to properly evaluate every diffusion paper! Here's a great example, where this work is more on the mathy side of diffusion models and you can look at the reviews[1]. Reviews are 6 (Weak Accept), 9 (Very Strong Accept), 8 (Strong Accept), 8, 6. Reviewer confidence was even low: 2, 4, 3, 3, 4, respectively (out of 5), and confidence is usually overstated.
Mind you, this is the #1 ML conference and these reviews are post rebuttal. There were over 13000 people reviewing that year[2] and they couldn't get people who had 5/5 confidence. This is even for a paper written by 2 top researchers at a top institution...
> The reviewers here are the "peers", and generally are expected to be qualified experts in the area that the paper deals with.
So no. They are "expert" when compared to the general public, but not necessarily "expert" in context to the paper being reviewed.
I hope the physical evidence is enough to convince you, because honestly this is quite common and there's a viewing bias. Most of the time we don't have this data for works that were rejected. But there's plenty of works that were accepted where you can see this. Not to mention (as stated in my original comment), multiple extremely influential works (worthy of a Nobel Prize) have been rejected. Here's a pretty famous example, where it had been rejected both for being "too trivial" (twice) as well as "obviously incorrect."[3] Yet, it resulted in a Nobel and is one of the most cited works in the field. Doesn't sound like these reviews helped the paper become better; sounds more like they were just wasting time.
[0] https://matt.might.net/articles/phd-school-in-pictures/
[1] https://openreview.net/forum?id=NnMEadcdyD
[2] https://media.neurips.cc/Conferences/NeurIPS2024/NeurIPS2024...
[3] https://en.wikipedia.org/wiki/The_Market_for_Lemons#Critical...
> You're misreading xondono as well as me. I think your idea of what peer review is (in practice) is too idealized.
I think it is you who has an ideological axe to grind and is missing the forest for the trees (in this case the practical benefits for the drawbacks). Of course the process isn't perfect. Of course it's a spectrum. That's precisely how journals end up with reputations.
If you don't want to play the reputational game, fine, self publish on your website. Protocols such as ipfs and centralized archives such as arxiv make that easier than ever. But just because you choose to reject a process doesn't mean that it isn't of benefit to other people. And it should go without saying that just because something is of benefit to me (in this case as a reader) doesn't mean that it isn't also flawed in some way.
godelski 19 hours ago [-]
I pretty much agree with you but wanted to nitpick this part
> mainly the editors, who pushed for correctness and novelty.
I don't want to use the word correctness here[0], because no one checks if the work is correct. Rather, I'd say the goal is to check for wrongness. A peer reviewer cannot determine if a work is correct simply by reading it. The only way to do this is replication or extension (which is the case for the work here: the physical verification was an extension of the earlier work). It's important to make this distinction because, as you say, it doesn't mean the work is right. Nor does it even mean the readers think it is right.
In the past, many journals published as long as they did not think there were serious errors and were not plagiarized. Editing is completely different, where we want to make sure works are communicated correctly.
But I purposefully didn't say "novelty".
It is a trash word that means nothing. The original intent was that work wasn't redone. That you can't go in and take credit for discovering something someone else did, which we'd call plagiarism. You could change all the words and still plagiarize.
It is VERY easy to find problems/limitations with works. All works have limitations. All works are incomplete. But are these reasons to reject? Often, no... You see the same thing on HN and it's a classic bias of STEM people. Hyperfixate on the issues. We're trained to, because that's the first step to solving problems! But that's not what matters in publishing, because we're not trying to solve all problems. We do it iteratively! It also runs counter to publishing quickly ("publish or perish"): what, you want to wait to publish until we've got a grand theory of everything? And don't get me started on how bad we are at predicting the impact of works and how impact often runs counter to the status quo (you can't paradigm shift by maintaining the paradigm). So we don't explore...
AND very frequently, we DO NOT WANT novelty in science. Sounds strange, but it is *critical* to science existing.
- Our goal is to figure out how things work. The causal structure of things. So this means works need to be reproducible. We *want* reproductions, but we also don't want them ad infinitum.
- We *also* want to find other ways to derive the same thing. Some reviewers will consider this novel while others won't, typically inversely related to their expertise in the field (more expert = more likely to consider novel while less expert means you can't see the nuanced differences which are important).
This greatly stifles innovation and reduces how well papers communicate their ideas.
The problem here is as we advance, nuances matter more and more. Think of it as with approximations. Calculating the first order term is usually computationally easy, with computation exponentially increasing as the order of accuracy increases. The nuances start to dominate. But by focusing on "novelty" (rather than plagiarism) we face the exact problem you mention.
> most editors are nowadays completely out of their depth when evaluating papers,
So authors end up just making their works look more convoluted, to look more impressive and make it look less like the work that they are building on top of. High experts can see right through this, and grad students usually groan but then just become accustomed to the shit and start doing the same thing. Because non-niche experts cannot differentiate the work that's being built upon from the new work.
It is a self-inflicted problem. As editors/reviewers we think we're doing right, but we're too dumb to see the minute (but important) differences. As authors we're just trying to get published, keep our jobs, and it's not exactly like the reviewers are "wrong". But it often just becomes a chase and does nothing to help make the papers actually better. This gets even worse with targeted acceptance rates, as it incentivizes reviewers to reject and be less nuanced. Which they're already incentivized to do because there's just so much stress and time crunch to the job anyways (including needing to rewrite papers because others did exactly this).
The targeted acceptance rates are just silly and we see the absurdity in domains like Machine Learning[1]. We have an exponentially increasing number of papers to review each year. This isn't just because there are new works, but because works are being resubmitted. Most of these conferences have 30% acceptance rates but the number of "wrong" papers is not that low. We also know the acceptance rate is very noisy for the majority of papers, where a different set of reviewers would result in a different outcome (see the multiple "NeurIPS experiment"s). You can do an easy model to see why this is bad. It just leads to more papers and if the number of reviewers stays the same, this is more reviews that need to be done per reviewer, which just exacerbates the problem. If you have 1000 fixed papers submitted each year and even a low percent of rejected works resubmitting the next year, like 10%, you actually have to review ~1075 papers. With a more realistic ~50% of rejected works getting recycled, you need to actually review ~1500 per year. Most serious authors will try a few times, and it is common to say "just keep trying".
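To spell out that arithmetic (a steady-state sketch; the 70% rejection rate comes from the 30% acceptance figure above, and the resubmit fractions are the same ones as in the text):

    def steady_state_reviews(new_papers=1000, reject_rate=0.7, resubmit_frac=0.1):
        # Each year: fresh submissions plus resubmissions of a fraction of
        # last year's rejects. Steady state solves
        #   N = new_papers + reject_rate * resubmit_frac * N
        return new_papers / (1 - reject_rate * resubmit_frac)

    print(round(steady_state_reviews(resubmit_frac=0.1)))  # 1075
    print(round(steady_state_reviews(resubmit_frac=0.5)))  # 1538, i.e. ~1500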
We don't have to do this to ourselves... It helps no one, and actually harms everyone. So... why? What are we gaining?
It's just so fucking stupid
/rant (have we even started?)
[0] I'm pretty sure we're going to agree, but we're talking in public and want to make sure we communicate with the public. Tbh, even many scientists think "correctness" is the same as "is correct"
[1] It is extra bad because the primary publishing venue is conferences. So you submit, get reviews (usually 3), get to do a rebuttal (often 1 page max), and then the final decision is made. There is no real discussion, so you have no real chance to explain things to near-niche experts. Worse with acceptance deadlines and overlapping deadlines between conferences. It is better in other domains since journals have conversations, but some of these problems still exist.
olddustytrail 21 hours ago [-]
> That’s because we’ve basically reinterpreted what “peer review” is.
Who is "we" in this scenario? Because that's certainly not how I've seen peer review work.
The editor would ask a small group of people in the field to act as reviewers and then send them the papers. They review it and send it back with any requests for changes prior to publication.
So they're the peers that are reviewing, not the editor.
godelski 18 hours ago [-]
Look at the history of Peer Review. What you see post 1950 is pretty different than what you see prior to that. I think this quote is the best one-liner, though I think everyone should dig much more into the question
> in the early 20th century, "the burden of proof was generally on the opponents rather than the proponents of new ideas."
That is, the reviewers had a higher burden than the authors. The bias is towards acceptance rather than rejection. In a perfect world we could only accept good papers and could reject bad papers, but we don't live in that world. So the question is "when we fail, which way do we want to fail?" Obviously, I'm on the side of Blackstone here.
https://en.wikipedia.org/wiki/Scholarly_peer_review#History
But that's the opposite of what the person I'm replying to said. They're saying everything is acceptable and I'm saying it's actually reviewed.
Ok, maybe that's not what you meant. Peer review doesn't reject papers because they don't agree with the orthodoxy; they reject them because they're not competent. Is that what you were getting at?
godelski 15 hours ago [-]
I'm a bit confused. What you described is what happens today. Yes. That has been my experience too, serving as a reviewer. I understood xondono to be referencing the history that I mentioned, which is where these reviewers didn't exist. So the requirement was different, which is what I'm saying about the burden of proof.
> Peer review doesn't reject papers because they don't agree with the orthodoxy; they reject them because they're not competent.
This is absolutely false and I don't know a single academic who hasn't seen competent papers get rejected.
Reviewers can reject for any reason. The system is built on trust, but incentivized to reject. "Not competent" is too vague of a term, just like "not novel."
In my other comment[0] I even reference one of the famous works that got rejected 3 times for being "not competent". This isn't a one-off case here, it is a common occurrence. On several occasions I've had to champion papers which were clearly competent yet my fellow reviewers simply were not familiar with the domain (they admitted this during discussion). I've also killed papers for similar reasons (a very rare event as I strongly bias towards accepting).
So I'm sorry, saying papers are only rejected because they are "not competent" is incredibly naive.
And I'm sorry, but the claim that "works aren't rejected because they don't agree with orthodoxy" is simply laughable. There's a long history of peers rejecting discoveries that upset the norms. This has happened to the majority of well known scientists. I'm not talking about the Church going after Galileo, I'm talking about things like Galileo arguing with Tycho Brahe or Christoph Scheiner. Einstein was critical of Bohr. Hertz was critical of Bell. The list goes on and on. The criticism was explicitly about running counter to orthodoxy. This is such a common thread in history that Max Planck famously said "Science advances one funeral at a time."
I think they meant "reinterpreted" over the last century, not over the span of your personal experience and career.
olddustytrail 17 hours ago [-]
They're saying it changed to not being reviewed properly and I'm saying from recent experience that it is.
gus_massa 24 hours ago [-]
Nitpicking:
> Putting the paper on ArXiv does mean peers are reviewing it.
You can upload a pdf to the ArXiv and not send it to any journal. I think you need an invitation to create an account, but it's 99% like WordPress.
mindcrime 19 hours ago [-]
> I think you need an invitation to create an account,
Anybody can create an account on arXiv.[1][2] However, having an account and uploading are different things. Strictly speaking, you have to be endorsed to submit a paper. But some endorsements are granted automatically, based on various criteria. So not everybody has to go explicitly ask people for an endorsement:
arXiv requires that users be endorsed before submitting their first paper to a category.
arXiv may give some people automatic endorsements based on subject area, topic, previous submissions, and academic affiliation. In most cases, automatic endorsement is given to authors from known academic institutions and research facilities.
Generally speaking, the "word on the street" is that it's as simple as "register with a .edu email address and you get auto endorsed, otherwise you don't." But I think the reality is slightly more complex than that. Although the exact details are kept private, probably at least in part to prevent people from gaming the system.
Note that the page encourages you to register with an "institutional" email address, but doesn't specifically say it has to be a .edu one.
arXiv submitters are therefore encouraged to associate an institutional email address, if they have one, with their arXiv account. This will expedite the endorsement process.
I think you're right to nitpick, but nitpicking the wrong thing.
> and not send it to any journal.
This doesn't matter.
The main point of my comment is "journal != peers reading the work". Which, of course, you could even say is true about ArXiv (fine to nitpick[0]). You can put on ArXiv and no one will read it or the only people who read it are not peers. You're right that submitting to a journal or conference (nearly) guarantees someone has read the paper.
The thing though is "peer" is weakly defined. I recognize that it is contextually defined, but I'd hope from the context of my comment you can tell that I'm using "peer" to mean "another person well versed in the niche topic of the paper." This is different from "a person well versed in the topic of the paper." The niche matters. I explain more in this comment[1]
Or to put it differently, just because someone read it doesn't mean they /read/ it.
[0] I'll revise the quoted text to "Just because it is on ArXiv doesn't mean peers aren't or haven't reviewed it". Though we can nitpick a little more and say that if there are more than one author on the paper, a peer has been much more likely to have reviewed the work (not all co-authors review the works they are authors on...)
Fully agree with you that the way much (but not all) of the publication process currently works is bad for everyone involved (including the taxpayer).
But I'd like to push back against the following.
> the way we say "hasn't been peer reviewed" implies that if it has been published in a journal then it's factual.
I think that's ascribing meaning that isn't actually there. Peer review _does_ tell you something. Certainly not "guaranteed correct" or even "guaranteed high quality", but it is definitely some sort of signal to the reader. Consider that this is nothing more than proactively pointing out that a linked paper, which closely resembles work that would typically be affiliated with a journal (a signal gated by an editor), be peer reviewed (another signal), and have those reviews possibly made public (yet another signal), does not have any of those signals. That is to say, it's an important disclaimer in order to avoid the appearance of attempting to deceive the audience.
devmor 24 hours ago [-]
> Being published in a journal doesn't mean a work is correct nor does it mean peers read it. Putting the paper on ArXiv does mean peers are reviewing it.
Sorry I don't get what you mean by this. How does putting the paper on ArXiv mean that peers are reviewing it any more than publishing it in a journal? Both do mean that peers have the opportunity to review it, but neither guarantees it, and ArXiv is infinitely easier to upload anything to and never have it even looked at.
xondono 23 hours ago [-]
Technically speaking, being published means at least the editors have reviewed. The quality of their reviews is another thing entirely.
godelski 18 hours ago [-]
Well... it can be desk rejected. Which I've actually had happen because the paper was already on ArXiv, even though it wasn't against the journal's policy. Took 4 weeks to resolve and then got desk rejected again for "not citing the correct works", with no further information... I don't submit to that journal anymore...
godelski 18 hours ago [-]
You're right. You can publish into the void.
I was hoping what was said would be easily interpreted but I was incorrect. I'll revise to "putting on ArXiv doesn't mean peers haven't reviewed it."
For what constitutes "a peer", I describe more here[0]. The definition varies wildly, and it is not an easy problem to determine who is even qualified to review a work. Hell, it can be hard to determine if you yourself are qualified to review a work!
While I advocate for mostly abandoning the journals and conferences (or dramatic restructuring), I won't act like there's no problems with just publishing to ArXiv (or more preferably, OpenReview, since it allows commenting). But the truth is that there's no globally optimal solution to this problem. I just think the benefits outweigh the costs here. Frankly, most authors aren't publishing (posting on ArXiv or whatever) in bad faith. If anything, I think our current system incentivizes bad faith publishing, but that's a larger conversation (coupled with this one but requires talking about a few other factors).
Frankly, we just waste a lot of time and effort for little to no gain (possibly even for losses).
That makes your intention much more clear, and I am inclined to agree. A paper should not be dismissed as mere prose on the fact that it was published via ArXiv alone.
tensor 21 hours ago [-]
Obviously journals vary in their standards, but many of the more respected ones do require other scientists read and critique the paper. You can argue about the quality of these reviews, certainly it's a process that needs improving, but this is what "peer reviewed" means.
Trying to claim ArXiv papers are "peer reviewed" is utter nonsense. As you correctly point out, the only requirement to being on ArXiv is that someone with an account uploaded it there. There are no requirements that it passes any sort of verification or vetting process whatsoever, let alone having other scientists read and critique it.
There is a very vocal movement these days that is trying to argue that we should do away with the traditional peer review process. Apparently that sometimes includes trying to redefine the very definition of "peer review" as the OP did.
godelski 18 hours ago [-]
> There is a very vocal movement these days that is trying to argue that we should do away with the ***traditional*** peer review process. Apparently that sometimes includes trying to redefine the very definition of "peer review" as the OP did.
(emphasis my own)
Sorry, but I don't think you have an accurate image. You have it backwards.
"Peer review" is a very new thing. We're talking really about mid 20th century, with this format not really being common until the 70's. If anything, we're trying to return to the earlier version of the system[0].
The older review system didn't have these concepts of acceptance rates, novelty, and all of that. They would mostly publish as long as your work was free of errors, did not copy others, and was not essentially a small variation of another work trying to take credit (I'd still consider this plagiarism).
We're fine with having conversations with reviewers, where we view this all as being "on the same team". A team trying to make the work the best work it can be. What we're not fine with is reviewers being set up as adversaries, who are looking for reasons to reject the works. That just creates perverse incentives.
Really, it's all about addressing the question:
What is the point of publishing?
I'd say that the primary goal is to communicate the work. A secondary goal is to help works be visible. But this form of review cannot validate works and does a bad job at invalidating works. Nor can this form of review determine how impactful a work will be, and we have a clear tendency to reject works that end up being highly influential.
We're checking the alignment of our current system with our intended goals. Even if we're wrong, I think it is obtuse to be outright dismissive. Should you not even ask the question? I'd argue that not asking the question is anti-scientific. Science requires challenging the status quo. If we don't, innovation slows to a crawl. You cannot rely on those before you having gotten everything right. Use the advantage of hindsight. Trust, but you must also verify (challenge).
[0] We believe that this is the better formulation. Did it have problems? Yes. Can they all be solved? No. Does a global optima exist where all problems can be solved with a different system? Also no. So the belief is that good intentions trying to solve these problems only ended up creating worse ones. The cure was worse than the poison.
> A 2003 editorial in Nature stated that, in the early 20th century, "the burden of proof was generally on the opponents rather than the proponents of new ideas."
this is an extremely esoteric thing - no net magnetism, but has some possibly-useful properties of atomic spin... useful if you're doing some spintronics, that is. maybe.
BizarroLand 1 day ago [-]
After reading that article I now understand what my boomer parents felt like watching star trek for the first time.
"In condensed matter physics, altermagnetism is a type of persistent magnetic state in ideal crystals. Altermagnetic structures are collinear and crystal-symmetry compensated, resulting in zero net magnetisation. Unlike in an ordinary collinear antiferromagnet, another magnetic state with zero net magnetization, the electronic bands in an altermagnet are not Kramers degenerate, but instead depend on the wavevector in a spin-dependent way due to the intrinsic crystal symmetry connecting different magnetic sublattices."
phatskat 17 hours ago [-]
Reminds me of this quote from the esteemed Leslie Claret
> Sell them on the structure. You can talk about it with confidence. Keep it simple. A little something like this, John. Hey. Let me walk you through the Donnelly nut spacing and crack system rim-riding rip configuration. Using a field of half-C sprats, and brass-fitted nickel slits, our bracketed caps, and splay-flexed brace columns vent dampers to dampening hatch depths of one half meter from the damper crown to the spurve plinths. How? Well, we bolster twelve husk nuts to each girldle-jerry, while flex tandems press a task apparatus of ten vertically composited patch-hamplers. Then, pin-flam-fastened pan traps at both maiden-apexes of the jim-joist. A little something like that, Lakeman.
rockskon 1 day ago [-]
Can it be used to supply inverse reactive current for use in unilateral phase detractors while also being capable of automatically synchronizing cardinal grammeters?
bigbuppo 1 day ago [-]
Well of course, that's why it needs Glyptal-impregnated, cyanoethylated kraft paper bushings. But use caution. The replenerative flow characteristics of positive ions in unilateral phase detractors may require the use of a quasistatic regeneration oscillator in some situations.
WalterBright 1 day ago [-]
That would be TNG. The original Star Trek didn't use <insert technobabble here> in the scripts.
srcnkcl 1 day ago [-]
Reads like Turbo Encabulator... maybe on purpose?
boothby 1 day ago [-]
As somebody with a professional interest in spin lattices: no, it doesn't. (Also: I'm unfamiliar with the term "Kramers degenerate" and am reading up now)
LearnYouALisp 19 hours ago [-]
Honestly I feel it should read "Kramers-degenerate" if it doesn't already
olddustytrail 21 hours ago [-]
> "In condensed matter physics, altermagnetism is a type of persistent magnetic state in ideal crystals.
Real crystals have impurities so they are harder to reason about. An ideal crystal is just one where we pretend it's perfect.
> Altermagnetic structures are collinear
The structure is lined up, like the diagrams in the article.
> and crystal-symmetry compensated, resulting in zero net magnetisation.
And the alternating lines are symmetrical so they "compensate" for each other and cancel out.
> the electronic bands in an altermagnet are not Kramers degenerate... (etc.)
The spin is different like in the diagram. Ok, that's a bit lame. Anyone else can give a simple but mostly accurate explanation?
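For my own attempt, here's a numerical cartoon (toy numbers, not specific to any real crystal): the moments cancel in real space, but the band energies still depend on spin in a direction-dependent, roughly d-wave way, which is the measurable part.

    import numpy as np

    # Real space: alternating up/down moments -> zero net magnetisation.
    spins = np.array([+1, -1] * 8)
    print(spins.sum())  # 0

    # Momentum space: a d-wave-like spin splitting E_up - E_down ~ kx^2 - ky^2
    # averages to zero but is nonzero along particular directions.
    J = 0.1  # toy coupling strength
    for kx, ky in [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]:
        print((kx, ky), 2 * J * (kx**2 - ky**2))
    # (1, 0) -> +0.2; (0, 1) -> -0.2; (1, 1) -> 0.0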
abtinf 1 day ago [-]
The writing on Wikipedia science and math articles tends to be absolutely indecipherable trash.
Treknobabble often makes more sense than Wikipedia, at least within the context of the show.
theideaofcoffee 1 day ago [-]
As someone formerly in the sciences, I can't suspend my disbelief long enough to make it through a treknobabble explanation; it's just cheesy enough to be painful. Though I think "trash" is a bit strong a word to describe a lot of science and math wiki articles. A lot of them are written by the practitioners or people with intimate knowledge, so I'd 100% expect jargon, and it can be impenetrable at times; that's where the references come in handy: textbooks, articles and whatnot. A bit opaque, yes, but not trash.
olddustytrail 21 hours ago [-]
Not trash as in wrong, but trash as in utterly useless.
If you have enough knowledge to understand the article then you don't need it, because you understand the field. If you don't, it's impenetrable.
Perhaps I'm wrong: are there people out there who learnt something from a Wikipedia page on maths because they fell between the two?
spauldo 16 hours ago [-]
I have the same experience. I've got enough math and enough interest that I can follow concepts taught to math undergrads, if it's explained well and I play with it a bit. Wikipedia math articles may as well be line noise to me. Fortunately, there are plenty of other sources I can turn to.
legohead 1 day ago [-]
"the crystal structure results in zero magnetisation."
?
Dylan16807 22 hours ago [-]
The crystal emits no magnetism as a whole, despite the different internal states it can take, because adjacent atoms cancel each other out.
Because each half of the net-zero magnet is arranged differently inside the crystal, there's still a good way to measure what state it's in. Or something like that; I can see the pretty graph, but I don't know what measurement you'd do.
MengerSponge 1 day ago [-]
This makes me want to write a conference talk "Towards a turbo encabulator"
Basically, any potential discovery that can barely fit the "new kind of..." framing usually sounds more impressive than it really is.
This article is full of it.
I'm a programmer with very basic knowledge of magnetism, so, I can't say for sure what the discovery means, or if it is a discovery at all.
echelon_musk 20 hours ago [-]
> I'm a programmer with very basic knowledge of magnetism, so, I can't say for sure what the discovery means, or if it is a discovery at all.
Perhaps you should have led with this.
alganet 19 hours ago [-]
I haven't said anything about magnetism. Read my comment again.
godelski 15 hours ago [-]
I read it, and I don't know where it implies you know anything about states of matter or material physics.
I also read the response, and it seems to be more general than you give credit. Allow me to interpret. "You probably shouldn't have strong opinions on things you don't have a strong expertise in."
alganet 15 hours ago [-]
Duh, it implies I don't know about magnetism, which is something I stated.
What it tries to do is to provoke me to impostor my way into searching about magnetism and then repeating something that I read. And it failed at that. I'm not here to display any prowess in physics knowledge.
godelski 12 hours ago [-]
Wait, you think the article is supposed to do that?
Have you considered you're not the audience?
alganet 10 hours ago [-]
No.
Yes.
card_zero 10 hours ago [-]
Well anyway it can't be "less important than it really is", nothing can.
If I'm reading this right, then the real big benefit of these things would be solid state magnetic storage.
The benefit of these things is they don't create a magnetic field while they do respond to magnetic fields. That means you can pretty tightly pack these things together without concern that they'll interact with each other. A light electric pulse could determine if the bit is a 1 or a zero and a strong pulse would flip the bit.
I'm guessing that due to this nature, these things would actually have pretty long shelf lives and near infinite read/write cycles since you are, effectively, just flipping atoms around and not actually breaking structures or dumping in charge.
These should mostly work with regular silicon manufacturing. The tricky part will be how tightly you can pack these things together before the reading structures start interfering with each other.
Feynman moment. Breaking it down into one sentence. Bravo!
Wouldn't this also enable a much higher resolution and better noise immunity for the entire zoo of industry sensors that are based on the Hall effect?
The idea here is that you can do it in something that is overall neutral.
Didn't we already play this exact scenario out with 3D XPoint? Is there any expectation that this new type of magnet will be able to compete with flash memory on a unit cost basis?
Actually I guess data retention when unpowered was one year or less but still, is that factor ultimately what led to its discontinuation? I doubt it. For long term data retention you're competing with magnetic tape which has a shelf life measured in decades and an impressively low cost per bit.
I find it a little annoying that they don't show the actual artwork (although they do link to a page with it[0]), and give a description that does not really capture what the image conveys. Because upon seeing it, it immediately becomes obvious how that might inspire someone who thinks about electromagnetic fields all day. Well, obvious to people with some passing familiarity with electromagnetic fields at least.
I'm assuming copyright got in the way but even then they could have added an equivalent illustration of their own.
[0] https://escherinhetpaleis.nl/en/about-escher/escher-today/ho...
[1] https://www.nga.gov/artworks/54229-horseman (alternate link in case the first one doesn't load)
> What’s more, a trio of researchers at the Beijing Institute of Technology in China realised that you can also create the right internal magnetic disturbances by stacking an antiferromagnet between layers of a different material, like a sandwich.
Does anyone else find it odd that they do not name the authors of the paper[0] that showed the second discovery? (Yichen Liu, Junxi Yu, and Cheng-Cheng Liu, for the record).
[0] https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.13...
Seems you can store information at high density in electron spin in materials where spins are naturally organized. However, so far the only suitable materials have been ferromagnets, which have macroscopic magnetic fields that make using them a nightmare. The new altermagnets have suitably organized spins but the atoms alternate their magnetic fields so there is no net magnetism from the material and they are easier to work with.
When there are particular topological textures like skyrmions you do get a spin hall effect which is quantised but in applying currents you also change the magnetisation through something called spin transfer torque.
This is a good scientific discovery, if replicated, but the hype drowns out the science.
I would love to have something at modern memory speed, which behaved like core: Turn off the machine, its run-state is frozen. Turn back on, the memory state is still there.
But the reality is that machines are built to DRAM, and DRAM persists as the basic memory model for the "architecture" of a system
> Although other types of magnetism, such as diamagnetism and paramagnetism have been categorized, these describe specific responses to externally applied magnetic fields rather than spontaneous magnetic orderings in materials.
* Czech Science Foundation * The Ministry of Education of the Czech Republic * European Research Council * Deutsche Forschungsgemeinschaft (German Research Foundation)
Natural resources v IP resources?
Post-1950 this is unambiguously the United States.
Just what you think the most lucrative is.
[1] https://space.commerce.gov/doc-study-on-economic-benefits-of...
https://www.youtube.com/watch?v=D0st_6sE7Bk
Thank you, _Airplane!_
https://youtu.be/0wxp-NxJny8
We shouldn't say "not peer reviewed" because it isn't accurate. Being published in a journal doesn't mean a work is correct nor does it mean peers read it. Putting the paper on ArXiv does mean peers are reviewing it. The point of publishing is to communicate our work to others and journals and conferences can often be harmful to that process, making researchers oversell or even avoid looking in certain directions because a few opinionated peers shoot them down. It's happened to Nobel level works too.
The review process is just fucked up. It might be able to tell you if a paper is wrong but it can't tell you if it has no mistakes or is right. I mean it took two years to confirm this one, right? (Physical validation) but the way we say "hasn't been peer reviewed" implies that if it has been published in a journal then it's factual. That's not how it works and frankly that's not how it should work.
On top of that they take money from the government, gets articles for free, don't pay reviewers (meaning the universities pay for reviewers), and have the audacity to charge people to read that work. It's basically just a scheme to extract government money.
Sorry, I really just hate the publication process. It stifles innovation and wastes so many people's time
[0] https://arxiv.org/abs/2309.01607
Peer review used to mean “some peers have reviewed it”, mainly the editors, who pushed for correctness and novelty. There was a clear difference between publishing and making a paper public. It never meant “it’s right”, but it meant “it has passed basic quality control and it’s worth your time to read it”.
Modern day academics push people to fragment into ever smaller niches, meaning most editors are nowadays completely out of their depth when evaluating papers, so now we keep referring to editor approval as “peer review” and try to diminish the public perception that comes with it.
The reviewers here are the "peers", and generally are expected to be qualified experts in the area that the paper deals with.
The problem is the word "expert". We're using it to mean different things, and the difference is important. Despite it appearing that way, "expert" is not a binary condition. It is a spectrum. Where along the spectrum requires context to determine the threshold. Ours (xondono, correct me if I misinterpreted), is higher than the one you're using.
Finding appropriate reviewers is a non-trivial task, which is kinda the entire problem. You can have a PhD in machine learning and that does not mean you're qualified to review another machine learning paper. I know, because I've told ACs I'm not qualified for certain works!
The problem is that what is being published is new knowledge. I'll refer to the (very very short) "illustrated guide to a Ph.D." How many people are qualified to determine if that knowledge is new? It's probably a lot fewer than you think. Let's go back to ML. Let's say your PhD and all your work is in Vision Transformers. Does that mean you're qualified to evaluate a paper on diffusion models? Truth is, probably not. Hell, there's been papers I've reviewed where I'm literally 1 of 2 people in the world who are the appropriate reviewers (the other is the main author of the paper we wrote that's being extended).
Hell, most people working on diffusion aren't even qualified to properly evaluate every diffusion paper! Here's a great example, where this work is more on the mathy side of diffusion models and you can look at the reviews[1]. Reviews are 6 (Weak Accept), 9 (Very Strong Accept), 8 (Strong Accept), 8, 6. Reviewer confidence was even low: 2, 4, 3, 3, 4, respectively (out of 5), and confidence is usually over stated.
Mind you, this is the #1 ML conference and these reviews are post rebuttal. There were over 13000 people reviewing that year[2] and they couldn't get people who had 5/5 confidence. This is even for a paper written by 2 top researchers at a top institution...
So no. They are "expert" when compared to the general public, but not necessarily "expert" in context to the paper being reviewed.I hope the physical evidence is enough to convince you, because honestly this is quite common and there's a viewing bias. Most of the time we don't have this data for works that were rejected. But there's plenty of works that were accepted that you can see this. Not to mention (as stated in my original comment), multiple extremely influential works (worthy of a Nobel Prize) have been rejected. Here's a pretty famous example, where it had both been rejected for being "too trivial" (twice) as well as "obviously incorrect."[3] Yet, it resulted in a Nobel and is one of the most cited works in the field. Doesn't sound like these reviews helped the paper become better, sounds more like it was just wasting time.
[0] https://matt.might.net/articles/phd-school-in-pictures/
[1] https://openreview.net/forum?id=NnMEadcdyD
[2] https://media.neurips.cc/Conferences/NeurIPS2024/NeurIPS2024...
[3] https://en.wikipedia.org/wiki/The_Market_for_Lemons#Critical...
I think it is you who has an ideological axe to grind and is missing the forest for the trees (in this case the practical benefits for the drawbacks). Of course the process isn't perfect. Of course it's a spectrum. That's precisely how journals end up with reputations.
If you don't want to play the reputational game, fine, self publish on your website. Protocols such as ipfs and centralized archives such as arxiv make that easier than ever. But just because you choose to reject a process doesn't mean that it isn't of benefit to other people. And it should go without saying that just because something is of benefit to me (in this case as a reader) doesn't mean that it isn't also flawed in some way.
In the past, many journals published as long as they did not think there were serious errors and were not plagiarized. Editing is completely different, where we want to make sure works are communicated correctly.
But I purposefully didn't say "novelty"
It is a trash word that means nothing. The original intent was that work wasn't redone. That you can't go in and take credit for discovering something someone else did, which we'd cal plagiarism. You could change all the words and still plagiarize.
It is VERY easy to find problems/limitations with works. All works have limitations. All works are incomplete. But are these reasons to reject? Often, no... You see the same thing on HN and it's a classic bias of STEM people. Hyperfixate on the issues. We're trained to, because that's the first step to solving problems! But that's not what matters in publishing, because we're not trying to solve all problems. We do it iteratively! It also runs counter to quickly publishing ("publish or perish") as what, you want to wait to publish till we got a grand theory of everything? And don't get me started on how bad we are at predicting impact of works and how impact often runs counter to the status quo (you can't paradigm shift by maintaining the paradigm). So we don't explore...
AND very frequently, we DO NOT WANT novelty in science. Sounds strange, but it is *critical* to science existing.
- Our goal is to figure out how things work. The causal structure of things. So this means works need to be reproducible. We *want* reproductions, but we also don't want them ad infinitum.
- We *also* want to find other ways to derive the same thing. Some reviewers will consider this novel while others won't, typically inversely related to their expertise in the field (more expert = more likely to consider novel while less expert means you can't see the nuanced differences which are important).
This greatly stifles innovation and reduces how well papers communicate their ideas.
The problem here is as we advance, nuances matter more and more. Think of it as with approximations. Calculating the first order term is usually computationally easy, with computation exponentially increasing as the order of accuracy increases. The nuances start to dominate. But by focusing on "novelty" (rather than plagiarism) we face the exact problem you mention.
So authors end up just making their works look more convoluted, to look more impressive and less like the work that they are building on top of. Niche experts can see right through this; as grad students we usually groan, but then just become accustomed to the shit and start doing the same thing, because non-niche experts cannot differentiate the work being built upon from the new work.
It is a self-inflicted problem. As editors/reviewers we think we're doing right, but we're too dumb to see the minute (but important) differences. As authors we're just trying to get published and keep our jobs, and it's not exactly like the reviewers are "wrong". But it often just becomes a chase and does nothing to actually make the papers better. This gets even worse with targeted acceptance rates, which incentivize reviewers to reject and be less nuanced. Which they're already incentivized to do, because there's just so much stress and time crunch to the job anyway (including needing to rewrite papers because others did exactly this).
The targeted acceptance rates are just silly, and we can see the absurdity in domains like Machine Learning[1]. We have an exponentially increasing number of papers to review each year. This isn't just because there are new works, but because works are being resubmitted. Most of these conferences have ~30% acceptance rates, but the number of "wrong" papers is nowhere near that high. We also know the accept/reject outcome is very noisy for the majority of papers, where a different set of reviewers would produce a different decision (see the multiple "NeurIPS experiment"s). You can do an easy model to see why this is bad (a quick sketch is below). Resubmissions just mean more papers, and if the number of reviewers stays the same, that's more reviews per reviewer, which just exacerbates the problem. If you have 1000 fresh papers submitted each year and even a low percentage of rejected works resubmitting the next year, like 10%, you actually have to review ~1075 papers. With a more realistic ~50% of rejected works getting recycled, you need to review ~1500 per year. Most serious authors will try a few times, and it is common to say "just keep trying".
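To make that arithmetic concrete, here's a minimal sketch of the model (my assumptions, mirroring the numbers above: a fixed stream of 1000 new papers per year, a 30% acceptance rate, and some fraction of rejections recycled the following year):

    # Steady-state yearly review load under resubmission.
    # Assumptions (illustrative, not from any real venue): N new papers/year,
    # acceptance rate a, and a fraction r of rejected papers resubmitted next year.
    # At steady state: S = N + (1 - a) * r * S  =>  S = N / (1 - (1 - a) * r)

    def steady_state_load(new_papers=1000, accept_rate=0.30, resubmit_rate=0.10):
        return new_papers / (1 - (1 - accept_rate) * resubmit_rate)

    for r in (0.10, 0.50):
        print(f"resubmit rate {r:.0%}: ~{steady_state_load(resubmit_rate=r):.0f} submissions/year")

    # resubmit rate 10%: ~1075 submissions/year
    # resubmit rate 50%: ~1538 submissions/year

The recycled fraction compounds: every extra round of rejection is pure review overhead stacked on top of the fixed stream of genuinely new work.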
We don't have to do this to ourselves... It helps no one, and actually harms everyone. So... why? What are we gaining?
It's just so fucking stupid
/rant (have we even started?)
[0] I'm pretty sure we're going to agree, but we're talking in public and I want to make sure we communicate with the public. Tbh, even many scientists think "correctness" is the same as "is correct".
[1] It is extra bad because the primary publishing venue is conferences. You submit, get reviews (usually 3), get to write a rebuttal (often 1 page max), and then the final decision is made. There is no real discussion, so you have no real chance to explain things to near-niche experts. It's worse with acceptance deadlines and overlapping deadlines between conferences. It is better in other domains, since journals allow conversations, but some of these problems still exist.
Who is "we" in this scenario? Because that's certainly not how I've seen peer review work.
The editor would ask a small group of people in the field to act as reviewers and then send them the papers. They review it and send it back with any requests for changes prior to publication.
So they're the peers that are reviewing, not the editor.
https://en.wikipedia.org/wiki/Scholarly_peer_review#History
Ok, maybe that's not what you meant. Peer review doesn't reject papers because they don't agree with the orthodoxy; they reject them because they're not competent. Is that what you were getting at?
Reviewers can reject for any reason. The system is built on trust, but incentivized to reject. "Not competent" is too vague a term, just like "not novel."
In my other comment[0] I even reference one of the famous works that got rejected 3 times for being "not competent". This isn't a one-off case here, it is a common occurrence. On several occasions I've had to champion papers which were clearly competent yet my fellow reviewers simply were not familiar with the domain (they admitted this during discussion). I've also killed papers for similar reasons (a very rare event as I strongly bias towards accepting).
So I'm sorry, saying papers are only rejected because they are "not competent" is incredibly naive.
And I'm sorry, but the claim that "works aren't rejected because they don't agree with orthodoxy" is simply laughable. There's a long history of peers rejecting discoveries that upset the norms. This has happened to the majority of well-known scientists. I'm not talking about the Church going after Galileo; I'm talking about things like Galileo arguing with Tycho Brahe or Christoph Scheiner. Einstein was critical of Bohr. Hertz was critical of Bell. The list goes on and on. The criticism was explicitly about running counter to orthodoxy. This is such a common thread in history that we even have the famous Max Planck line: "Science advances one funeral at a time."
[0] https://news.ycombinator.com/item?id=44587535
> Putting the paper on ArXiv does mean peers are reviewing it.
You can upload a pdf to the ArXiv and not send it to any journal. I think you need an invitation to create an account, but it's 99% like WordPress.
Anybody can create an account on arXiv.[1][2] However, having an account and uploading are different things. Strictly speaking, you have to be endorsed to submit a paper. But some endorsements are granted automatically, based on various criteria. So not everybody has to go explicitly ask people for an endorsement:
> arXiv requires that users be endorsed before submitting their first paper to a category.
> arXiv may give some people automatic endorsements based on subject area, topic, previous submissions, and academic affiliation. In most cases, automatic endorsement is given to authors from known academic institutions and research facilities.
Generally speaking, the "word on the street" is that it's as simple as "register with a .edu email address and you get auto endorsed, otherwise you don't." But I think the reality is slightly more complex than that. Although the exact details are kept private, probably at least in part to prevent people from gaming the system.
Note that the page encourages you to register with an "institutional" email address, but doesn't specifically say it has to be a .edu one.
> arXiv submitters are therefore encouraged to associate an institutional email address, if they have one, with their arXiv account. This will expedite the endorsement process.
[1]: https://info.arxiv.org/help/registerhelp.html
[2]: https://arxiv.org/user/register
The main point of my comment is "journal != peers reading the work". Which, of course, you could even say is true about ArXiv (fine to nitpick[0]). You can put on ArXiv and no one will read it or the only people who read it are not peers. You're right that submitting to a journal or conference (nearly) guarantees someone has read the paper.
The thing though is "peer" is weakly defined. I recognize that it is contextually defined, but I'd hope from the context of my comment you can tell that I'm using "peer" to mean "another person well versed in the niche topic of the paper." This is different from "a person well versed in the topic of the paper." The niche matters. I explain more in this comment[1]
Or to put it differently, just because someone read it doesn't mean they /read/ it.
[0] I'll revise the quoted text to "Just because it is on ArXiv doesn't mean peers aren't or haven't reviewed it". Though we can nitpick a little more and say that if there is more than one author on the paper, it is much more likely that a peer has reviewed the work (not all co-authors review the works they are authors on...)
[1] https://news.ycombinator.com/item?id=44587535
But I'd like to push back against the following.
> the way we say "hasn't been peer reviewed" implies that if it has been published in a journal then it's factual.
I think that's ascribing meaning that isn't actually there. Peer review _does_ tell you something. Certainly not "guaranteed correct" or even "guaranteed high quality", but it is definitely some sort of signal to the reader. Consider that the disclaimer is nothing more than proactively pointing out that a linked paper, which closely resembles work that would typically be affiliated with a journal (a signal gated by an editor), peer reviewed (another signal), and possibly have its reviews made public (yet another signal), has none of these signals. That is to say, it's an important disclaimer in order to avoid the appearance of attempting to deceive the audience.
Sorry I don't get what you mean by this. How does putting the paper on ArXiv mean that peers are reviewing it any more than publishing it in a journal? Both do mean that peers have the opportunity to review it, but neither guarantees it, and ArXiv is infinitely easier to upload anything to and never have it even looked at.
I was hoping what was said would be easily interpreted but I was incorrect. I'll revise to "putting on ArXiv doesn't mean peers haven't reviewed it."
For what constitutes "a peer", I describe more here[0]. The definition varies wildly, and it is not an easy problem to determine who is even qualified to review a work. Hell, it can be hard to determine if you yourself are qualified to review a work!
While I advocate for mostly abandoning journals and conferences (or dramatically restructuring them), I won't act like there are no problems with just publishing to ArXiv (or, preferably, OpenReview, since it allows commenting). But the truth is that there's no globally optimal solution to this problem; I just think the benefits outweigh the costs here. Frankly, most authors aren't publishing (posting on ArXiv or whatever) in bad faith. If anything, I think our current system incentivizes bad-faith publishing, but that's a larger conversation (coupled with this one, but it requires talking about a few other factors).
Frankly, we just waste a lot of time and effort for little to no gain (possibly even for losses).
[0] https://news.ycombinator.com/item?id=44587535
Trying to claim ArXiv papers are "peer reviewed" is utter nonsense. As you correctly point out, the only requirement to being on ArXiv is that someone with an account uploaded it there. There are no requirements that it passes any sort of verification or vetting process whatsoever, let alone having other scientists read and critique it.
There is a very vocal movement these days that is trying to argue that we should do away with the traditional peer review process. Apparently that sometimes includes trying to redefine the very definition of "peer review" as the OP did.
"Peer review" is a very new thing. We're talking really about mid 20th century, with this format not really being common until the 70's. If anything, we're trying to return to the earlier version of the system[0].
The older review system didn't have these concepts of acceptance rates, novelty, and all of that. They would mostly publish if your work was void of errors, did not copy others, or was not essentially a small variation of another work and trying to take credit (I'd still consider this plagiarism).
We're fine with having conversations with reviewers, where we view this all as being "on the same team". A team trying to make the work the best work it can be. What we're not fine with is reviewers being set up as adversaries, who are looking for reasons to reject the works. That just creates perverse incentives.
Really, it's all about addressing the question: what is this review process actually for?
I'd say that the primary goal is to communicate a work. A secondary goal is to help works be visible. But this form of review cannot validate works and does a bad job at invalidating them. Nor can this form of review determine how impactful a work will be, and we have a clear tendency to reject works that end up being highly influential. We're checking the alignment of our current system with our intended goals. Even if we're wrong, I think it is obtuse to be outright dismissive. Should you not even ask the question? I'd argue that not asking the question is anti-scientific. Science requires challenging the status quo. If we don't, innovation slows to a crawl. You cannot rely on those before you having gotten everything right. Use the advantage of hindsight. Trust, but you must also verify (challenge).
[0] We believe that this is the better formulation. Did it have problems? Yes. Can they all be solved? No. Does a global optimum exist where all problems are solved by a different system? Also no. So the belief is that good intentions aimed at solving these problems only ended up creating worse ones. The cure was worse than the disease.
https://en.wikipedia.org/wiki/Scholarly_peer_review#History
https://en.wikipedia.org/wiki/Altermagnetism
This is an extremely esoteric thing - no net magnetism, but it has some possibly-useful properties of atomic spin... useful if you're doing some spintronics, that is. Maybe.
"In condensed matter physics, altermagnetism is a type of persistent magnetic state in ideal crystals. Altermagnetic structures are collinear and crystal-symmetry compensated, resulting in zero net magnetisation. Unlike in an ordinary collinear antiferromagnet, another magnetic state with zero net magnetization, the electronic bands in an altermagnet are not Kramers degenerate, but instead depend on the wavevector in a spin-dependent way due to the intrinsic crystal symmetry connecting different magnetic sublattices."
> Sell them on the structure. You can talk about it with confidence. Keep it simple. A little something like this, John. Hey. Let me walk you through the Donnelly nut spacing and crack system rim-riding rip configuration. Using a field of half-C sprats, and brass-fitted nickel slits, our bracketed caps, and splay-flexed brace columns vent dampers to dampening hatch depths of one half meter from the damper crown to the spurve plinths. How? Well, we bolster twelve husk nuts to each girldle-jerry, while flex tandems press a task apparatus of ten vertically composited patch-hamplers. Then, pin-flam-fastened pan traps at both maiden-apexes of the jim-joist. A little something like that, Lakeman.
Real crystals have impurities so they are harder to reason about. An ideal crystal is just one where we pretend it's perfect.
> Altermagnetic structures are collinear
The structure is lined up, like the diagrams in the article
> and crystal-symmetry compensated, resulting in zero net magnetisation.
And the alternating lines are symmetrical so they "compensate" for each other and cancel out.
> the electronic bands in an altermagnet are not Kramers degenerate... (etc.)
The spin is different, like in the diagram. Ok, that's a bit lame. Can anyone else give a simple but mostly accurate explanation?
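My rough attempt at making that last line concrete (my own gloss on the Wikipedia wording, so take it with a grain of salt): in an ordinary antiferromagnet the two spin bands coincide at every wavevector, while in an altermagnet they split, with the sign of the splitting alternating across momentum space, something like

    E_\uparrow(\mathbf{k}) = E_\downarrow(\mathbf{k})                                  (ordinary antiferromagnet)
    E_\uparrow(\mathbf{k}) - E_\downarrow(\mathbf{k}) \propto k_x^2 - k_y^2            (altermagnet, d-wave-like splitting)

so averaged over the whole Brillouin zone the splitting cancels, which is consistent with there still being zero net magnetization.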
Treknobabble often makes more sense than Wikipedia, at least within the context of the show.
If you have enough knowledge to understand the article, then you don't need it, because you already understand the field. If you don't, it's impenetrable.
Perhaps I'm wrong: are there people out there who learnt something from a Wikipedia page on maths because they fell between the two?
?
Because each half of the net-zero magnet is arranged differently inside the crystal, there's still a good way to measure what state it's in. Or something like that; I can see the pretty graph, but I don't know what measurement you'd do.
https://en.wikipedia.org/wiki/Turbo_encabulator
It reminds me of the "new state of matter discovered" kinds of articles that are known to get clicks.
https://trends.google.com/trends/explore?date=all&q=%22new%2...
And also the "vantablack" fad:
https://trends.google.com/trends/explore?date=all&q=vantabla...
Basically, any potential discovery that can be made to fit the "new kind of..." framing usually sounds more impressive than it really is.
This article is full of it.
I'm a programmer with very basic knowledge of magnetism, so, I can't say for sure what the discovery means, or if it is a discovery at all.
Perhaps you should have led with this.
I also read the response, and it seems to be more general than you give it credit for. Allow me to interpret: "You probably shouldn't have strong opinions on things you don't have strong expertise in."
What it tries to do is provoke me into impostoring my way through searching about magnetism and then repeating something that I read. And it failed at that. I'm not here to display any prowess in physics knowledge.
Have you considered you're not the audience?