The operating word at Thursday night’s annual Kristallnacht remembrance in Keene was “choose” — “We choose to remember,” community leaders said.
But just passively remembering, or going through the motions of paying respect to the millions killed during the Holocaust — which started in earnest with Kristallnacht, or “The Night of Broken Glass,” 81 years ago on Nov. 9, 1938 — will no longer suffice, each speaker stressed.
Silence, candlelight and more silence gave attendees ample time to think of instances where they could make a difference, or could have made a difference but fell short.
The story of one survivor anchored the evening, driving home the parallels that illustrate why understanding genocides like the Holocaust remains crucial today.
Stephan Lewy, who was born in Berlin in 1925, was honored during the ceremony at The Colonial Theatre, but was unable to travel to Keene for the occasion.
Appearing via video recording, Lewy captivated the audience as he told of his journey and that of his fellow “Ritchie Boys.”
Lewy, who lived in New Hampshire for years and now resides with family in Buffalo, N.Y., was given the French Chevalier de la Legion d’Honneur (Legion of Honor) in 2014 at the Statehouse in Concord — France’s highest honor for civilian or military merit.
He was 13 and living at the Auerbach Orphanage for Jewish children in Germany during Kristallnacht, when Nazis carried out a near-nationwide burning and pillaging of synagogues, businesses and other Jewish institutions. Many Jewish people were also murdered during the violence in Germany, Austria and the Sudetenland, and thousands were sent to concentration camps.
That night, Nazis forced the orphanage’s children into a synagogue with a severed gas line, locking them inside, according to a bio about Lewy available through the Holocaust Resource Center of Buffalo. The children survived by breaking windows.
“The fresh air saved us,” Lewy recalled.
His father and stepmother scrambled to find safe passage to the United States, a process complicated when his father failed a medical exam required for a visa, and Lewy ended up in France via the Kindertransport, according to his bio.
Miraculously, he was reunited with his parents some three years later in the U.S. after leaving fascist Vichy France. He ended up returning to Europe, however, to fight the Nazis in World War II.
Due to immigration restrictions in America at the time, serving in the military was the best way for Lewy and many other refugees of the Holocaust to have a shot at a new life stateside.
Lewy was part of a special intelligence unit trained at Camp Ritchie in Maryland, hence the “Ritchie Boys” nickname. The group of predominantly Jewish immigrants used their native German to collect well over half of all battlefield intelligence gathered by the Allied Forces during WWII, according to author Bruce Henderson in his book “Sons and Soldiers.”
Despite all the Nazis had done to their friends and extended families, Lewy and his unit refrained from using torture, according to a documentary excerpt shown Thursday, and instead extracted information from captured German soldiers on their way to saving countless lives.
After the war, Lewy started a family, and he continues to speak out about his experiences.
Several community leaders gave prepared remarks Thursday, culminating in the candle-lighting ceremony after Keene Mayor Kendall Lane, Fire Chief Mark Howard, Police Chief Steven Russo and others pledged to keep the Elm City welcoming and inclusive.
There was also a dance presentation from MoCo Arts students, with choreography by Tracy Grissom depicting a struggle between an in-group and an out-group, ultimately leading the audience to examine their own judgments and their willingness to help someone facing humiliation and despair.
Several basic facts from the 1930s rang eerily familiar when juxtaposed with the headwinds of American politics in 2019.
Tom White, coordinator of educational outreach at Keene State’s Cohen Center for Holocaust and Genocide Studies, drew parallels from simple recurring phrases such as “America First,” a slogan of the resurgent Ku Klux Klan.
Building on the ethos of the Cohen Center, White emphasized that genocides are a “process, not an event,” encouraging members of the audience to engage in introspection.
“Tonight, as we hear from the 1930s, we must wrestle with the question: Who do we want to be?” White said.
The urge for self-interrogation was palpable at several points Thursday night, such as when Kati Preston came up to the microphone to light a candle, thanking a Hungarian woman who let her hide in her barn as a 5-year-old, saving her from the Nazis.
“She was a simple peasant girl, and she hid me in her barn, and risked her own life to save me,” Preston said.
At the commemoration’s close, the audience was encouraged to leave the theater in silence.
When the cellphones came back on and the lobby cleared out, the candles remained lit.
This article has been altered to correct Tom White's title at the Cohen Center.
WASHINGTON — Election officials and social media firms already flummoxed by hackers, trolls and bots are bracing for a potentially more potent weapon of disinformation as the 2020 election approaches — doctored videos, known as “deepfakes,” that can be nearly impossible to detect as inauthentic.
In tech company board rooms, university labs and Pentagon briefings, technologists on the front lines of cybersecurity have sounded alarms over the threat, which they say has increased markedly as the technology to make convincing fakes has become increasingly available.
On Tuesday, leaders in artificial intelligence plan to unveil a tool to push back — it includes scanning software that the University of California, Berkeley has been developing in partnership with the U.S. military, which the industry will start providing to journalists and political operatives. The goal is to give the media and campaigns a chance to screen possible fake videos before they could throw an election into chaos.
The software is among the first significant efforts to arm reporters and campaigns with tools to combat deepfakes. It faces formidable hurdles — both technical and political — and the developers say there’s no time to waste.
“We have to get serious about this,” said Hany Farid, a computer science professor at UC Berkeley who is working with the AI Foundation, a San Francisco nonprofit, to confront the threat of deepfakes.
“Given what we have already seen with interference, it does not take a stretch of imagination to see how easy it would be,” he added. “There is real power in video imagery.”
The worry that has gripped artificial intelligence innovators is of a fake video surfacing days before a major election that could throw a race into turmoil. Perhaps it would be grainy footage purporting to show President Donald Trump plotting to enrich himself off the presidency or Joe Biden hatching a deal with industry lobbyists or Sen. Elizabeth Warren mocking Native Americans.
The concern goes far beyond the small community of scientists.
“Not even six months ago this was something available only to people with some level of sophistication,” said Lindsay Gorman, a fellow at the Alliance for Securing Democracy, a bipartisan think tank. Now the software to make convincing fakes is “available to almost everyone,” she said.
“The deepfakes problem is expanding. There is no reason to think they won’t be used in this election.”
Facebook has launched its own initiative to speed up development of technology to spot doctored videos, and it is grappling with whether to remove or label deepfake propaganda when it emerges. Google has also been working with academics to generate troves of audio and video — real and fake — that can be used in the fight.
A new California law, AB 730, which takes effect in January, will make it illegal to distribute manipulated audio or video of a candidate that is maliciously deceptive and “would falsely appear to a reasonable person to be authentic.” There is a bipartisan effort in Congress to pass similar legislation.
Such bans, though, are legally precarious and could prove difficult to enforce in part because the line between a malicious fake and a satirical video protected under the First Amendment is a difficult one to draw.
The urgency around the videos comes as artificial-intelligence developers unveil demos of deepfakes that appear stunningly authentic.
The most well-known is a convincing video of former President Barack Obama reciting an innocuous passage he never said. The technology records another person saying the words, then grafts the lip movements and sound onto an image of the target, using algorithms and huge databases of real footage to seamlessly pass off the words as authentic.
The resulting videos pose a major problem for disinformation experts, who have found many potential solutions fall short. A company like Facebook, for example, might not be able to distinguish between a deepfake and a run-of-the-mill political video with real footage that has been legitimately and obviously altered for effect — maybe to highlight the candidate, or to make a satirical point.
“The technology to detect deepfakes is lagging behind,” said Robert Chesney, a University of Texas law professor who researches the deceptive videos. “A huge amount of money has been put forward to try to crack this nut.”
The potential to weaponize the tools of artificial intelligence against American elections is unnerving to the AI Foundation, the nonprofit arm of a firm that develops and markets artificial-intelligence applications. Among the company’s works in progress are online clones of current-day business and spiritual leaders that could live on forever.
A recent demo for reporters featured a video chat led by an artificial recreation of Deepak Chopra, the mindfulness luminary. The avatar exchanged some pleasantries and then responded to questions about coping with work stress by guiding the group in a short meditation.
“With these big commercial opportunities come significant risks,” said Lars Buttler, CEO of the AI Foundation. “We are focusing half of our energy on prevention/detection, in anticipation of what could go wrong.”
And a lot can go wrong.
A recently altered video of House Speaker Nancy Pelosi, slowed to leave the false appearance that she was inebriated and disoriented, spread across the internet like a virus before fact checkers could set the record straight.
The video was more “cheap fake” than deepfake, using crude editing technology that could easily be detected. But it foreshadowed how unprepared voters are to process altered videos.
“There is a real danger that video manipulation tools are getting so good that normal people on the street won’t be able to tell anymore what happened,” said Buttler. “We face the risk that at some point we will no longer be able to agree what objective reality is.”
The foundation, which earlier this year enlisted Twitter co-founder Biz Stone as a co-director, is hoping the detection tools it is developing will help avert that. Media and political professionals given access to its “Reality Defender 2020” portal will be invited to run video they wish to check through two algorithms developed by scientists on the frontlines of artificial intelligence.
The UC Berkeley algorithm compares the subtle mannerisms of whatever politician is featured in the video in question with their actual mannerisms mined from an extensive trove of authentic video. The software can then assess whether the two are in sync.
“Every person has a correlation between what they say and how they act otherwise,” said Buttler. “It is almost as unique as a fingerprint. If they are out of sync, it is a telltale sign. You can determine a mathematical correlation.” These differences are typically imperceptible to the naked eye, he said.
The videos are also run through a separate algorithm, developed in partnership with the FaceForensics project at the Technical University of Munich in Germany, which takes them apart pixel by pixel to look for signs they were altered.
Google has been working with the Munich project to create thousands of deepfake videos that are used to strengthen such algorithms, enabling them to learn how to detect patterns that emerge inside the machinery of videos that are altered, but are not visible to the viewer.
Whether the detection technology will be effective — and durable in the face of a threat that continues to evolve — remains to be seen. Those involved in combating deepfakes foresee a perpetual cat-and-mouse game, in which architects of misinformation use detection technology to build ever more evasive methods.
The plan with Reality Defender 2020 is to allow access only to legitimate media outlets and political campaigns. But that blueprint is fraught, as the technology risks being branded partisan if access is overly restricted and being compromised if made available to outlets and operatives that have murky affiliations.
And even if the detection technology turns out to be flawless, the reluctance of Facebook and other social media giants to take down even demonstrably false and misleading content threatens to limit its effectiveness.
That’s a major concern of Farid, the UC Berkeley scientist.
“I can do as much hard work as I can to detect deepfakes, but if at the end of the day Facebook says, ‘We are OK with these,’ then we all have a problem,” he said. He is skeptical of the social media giant, even as it funds his lab’s detection work. “I told them it is not enough just to work with academics to develop this technology and put out press releases and blog posts about it. They have to do something with it.”
The Keene Downtown Group wants city councilors to consider making parking free in the mornings.
The group’s president, Mark Rebillard, and board member Roger Weinreich sent Mayor Kendall W. Lane and the City Council a letter Oct. 30 asking for the establishment of a free parking program from 8 to 11 a.m. on Main Street and in the downtown area.
Rebillard said Friday morning the proposal is “a matter of a starting point” to begin talking about the idea.
As of now, parking fees in Keene are enforced Monday through Saturday, from 8 a.m. to 5 p.m.
The City Council voted unanimously at its meeting Thursday to refer the request to its finance, organization and personnel committee, which is the standard first step in the council process. That committee convenes next Thursday, Nov. 14, at 6:30 p.m. in City Hall, and members of the public can provide input at that session.
The Keene Downtown Group is a nonprofit membership organization focused on the vitality of the Main Street area.
In the letter, the downtown group offers to partner with the city’s parking department to develop the parameters of the free parking program and later implement it. The goal is to increase downtown business and social activity by inviting people to visit Main Street in the mornings, according to the letter.
“At this time there appears to be an abundance of open parking spaces along Main Street in the morning,” the letter continues. “Parking revenue for this period has likely diminished and this is an excellent time to try some new ideas.”
According to the city budget, parking revenue has declined each of the past two fiscal years.
Under the group’s proposal, the parking department would determine the program’s feasibility, footprint and duration, and the initiative could be evaluated, adjusted or terminated if it’s unsustainable or doesn’t “achieve the desired results.”
On-street meters cost 85 cents per hour, and off-street meters, such as those in the Gilbo Avenue lot, are 35 cents per hour. These prices went up at the beginning of this year, the first increase since November 2015. Before that, parking fees hadn’t changed since 2002.
Meter fees, along with parking-space rentals and fines, go to the city’s parking fund, which is used to enforce regulations and maintain parking areas and facilities. In the 2019-20 operating budget released earlier this year, the city estimated the parking fund earned about $1.88 million in the prior year.